
2026-01-29

Salesforce Spring ’26 Brings Major Debug Improvements to Flow Builder

If you’ve been building flows for any length of time, you already know this: a lot of the real work and time goes into debugging. It’s re-running the same automation over and over. Swapping out record IDs. Resetting input values. Clicking Debug, making a small change, saving, and sometimes starting the whole setup again. That loop is where Flow builders spend a lot of their time, especially once flows get even moderately complex.

Salesforce’s Spring ’26 release finally takes aim at that reality. Instead of piling on new features, this update focuses on removing friction from the debugging experience itself. The result is a Flow Builder that feels faster, less disruptive, and much closer to a modern development environment.

Debug Sessions That Don’t Forget Everything

One of the most impactful improvements in Spring ’26 is also one of the simplest: Flow Builder now remembers your debug configuration while you’re actively editing a flow. When you debug a flow, make a change, and save, Salesforce preserves the triggering record you used, your debug options, and your input variable values. That means no more losing your setup every time you click Save, no more re-pasting record IDs, and no more rebuilding your test scenario from scratch.

Your debug session stays intact until you refresh your browser, close Flow Builder, or manually click Reset Debug Settings. This is a big quality-of-life upgrade, especially if you work with record-triggered flows that have edge cases, complex decision logic, multi-screen flows with test data, or anything that requires several small iterations to get right. The practical impact is simple: you can now fix, save, and re-run flows much faster, without constantly breaking your momentum.

Flow Tests Are No Longer “Latest Version Only”

Spring ’26 also changes how flow tests work behind the scenes.

Previously, flow tests were tied only to the latest version of a flow. As soon as you created a new version, older tests were essentially left behind. If a test no longer applied, you deleted it. If it still applied, you recreated it. Now, tests can be associated with specific flow versions.

Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_debug_test_versions.htm&release=260&type=5

You can now reuse the same test across multiple flow versions or limit it to only the versions it truly belongs to, and when you create a new version, Salesforce automatically carries those tests forward from the version you cloned. This gives you much tighter control over scenarios like preserving regression tests for older logic, maintaining multiple supported versions, validating breaking changes, and keeping historical test coverage intact. Instead of treating tests as disposable, they become part of your flow’s lifecycle. This is a foundational shift for teams building mission-critical automation.

Compare Screen Flow Versions to See What Changed

Salesforce has had version comparison in other areas of the platform, but Spring ’26 brings it to screen flows. You can now compare any two versions of a screen flow and instantly see what changed across elements, resources, fields, components, properties, and styles.

This makes it much easier to answer the first question most debugging starts with: what changed? Instead of manually opening versions side by side, you get a clear view of differences, helping you pinpoint where issues may have been introduced and focus your testing where it actually matters.

Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_mgmt_compare_screen_flow_versions.htm&release=260&type=5

More Control When Debugging Approvals and Orchestrations

Debugging long approval chains or orchestrations has always been painful. You’d often have to run the entire thing just to test one step. Spring ’26 introduces several upgrades that make this far more surgical.

Complete work items directly in Flow Builder

You can now complete orchestration and approval work items without leaving Flow Builder.

While debugging, interactive steps can be opened directly on the canvas. Once completed, the orchestration or approval process resumes immediately.

This keeps the entire test cycle inside the builder instead of bouncing between apps, emails, and work queues.

Debug only the part you care about

You can now define a start point, an end point, or both when debugging orchestration and approval flows, which gives you much more control over what actually runs. Instead of being forced to execute the entire automation, you can skip earlier stages, stop before downstream logic, isolate a single phase, or focus on one problematic section. When you skip steps, you can also provide test inputs to simulate outputs from earlier stages. In other words, you no longer have to run the whole machine just to test one gear.

Selectively control which steps execute

Salesforce has expanded test output controls beyond rollback mode.

You can now decide which orchestration or approval steps should run while debugging, and which should be skipped, directly from the new Configure Test Output experience.

This makes it much easier to validate edge cases, exception handling, and conditional behavior without unnecessary noise.

Smarter Debugging for More Advanced Flow Types

Spring ’26 also delivers improvements for more specialized use cases.

Segment-Triggered Flows: Testing multiple records at once

For segment-triggered flows, you can now debug up to ten records at the same time instead of testing one record after another. You can select multiple segment members, run the debugger, and cycle through each result to see exactly how different records move through your flow.

The canvas highlights the active path for the selected record, and you can filter results by successes or failures, making it much easier to spot inconsistencies. This is especially useful when validating logic across different customer types, messy or incomplete data, and edge cases that would normally take many separate test runs to uncover.

Why This Release Actually Matters

It’s easy to skim release notes and see “debug improvements” as minor polish, but debugging speed directly affects how confidently people build automation, how complex flows can realistically become, how quickly teams can ship fixes, and how much risk is involved in every change.

With these changes, you can rerun the same scenarios without constantly rebuilding your debug setup, test individual flow versions with far more precision, and isolate only the parts of your logic you actually care about. You can walk through approvals and orchestrations directly inside Flow Builder instead of jumping between tools, and even validate how a flow behaves across multiple records in a single debug run. This is the kind of release that changes how Flow Builder feels to use.

Conclusion

Salesforce has spent the last few releases expanding what Flow can do, and Spring ’26 is about improving how Flow is built. Persistent debug sessions, version-aware tests, selective execution, in-builder work items, and targeted debugging all point in the same direction. Flow Builder is evolving from a configuration tool into a true development environment.

If you build anything non-trivial in Flow, these changes will save you time immediately. And if you teach, support, or scale Flow across teams, they open the door to far better testing practices going forward.

Explore related content:

Top Spring ’26 Salesforce Flow Features

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

Spring ’26 Release Notes: Highlights for Admins and Developers

#FlowBuilder #LowCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
2025-12-30

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

Flow builders, rejoice! With the Spring ’26 release, you can now trigger your flow automations on the ContentDocument and ContentVersion objects for files and attachments. Salesforce delivered a new event type in the previous release that supported flow triggers for standard-object files and attachments, but the functionality was limited. In this release, Salesforce gives us the ability to trigger on all new files/attachments and their updates for all objects.

Use case: When a document is uploaded to a custom object with lookups to other objects like Contact and Account, add links to those objects so that the same file is visible and listed under their related lists.

You could easily expand this use case to add additional sharing to the uploaded file, which is also a common pain point in many organizations. I will leave that aside for now; you can easily explore it by expanding the functionality of this flow.

Objects that are involved when you upload a file

In Salesforce, three objects work together to manage files: ContentDocument, ContentVersion and ContentDocumentLink.

Think of them as a hierarchy that separates the file record, the actual data, and the location where it is shared. The definitions of these three core objects are:

ContentDocument: Represents the “shell” or the permanent ID of a file. It doesn’t store the data itself but acts as a parent container that remains constant even if you upload new versions.
ContentVersion: This is where the actual file data (the “meat”) lives. Every time you upload a new version of a file, a new ContentVersion record is created. It tracks the size, extension, and the binary data.
ContentDocumentLink: This is a junction object that links a file to other records (like an Account, Opportunity, or Case) or users. It defines who can see the file and what their permissions are.

Object Relationships:

The relationship is structured to allow for version control and many-to-many sharing:
ContentDocument > ContentVersion: One-to-Many. One document can have many versions, but only one is the “Latest Published Version.”
ContentDocument > ContentDocumentLink: One-to-Many. One document can be linked to many different records or users simultaneously.

ContentDocumentLink is a junction object that does not allow duplicates. If you attempt to create the relationship between a linked entity and the content document when it already exists, your attempt will fail.
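
To make the model above concrete, here is a minimal Apex sketch of the upload lifecycle, assuming an org with at least one Account; the title, path, and file body are placeholders:

Id accountId = [SELECT Id FROM Account LIMIT 1].Id;

ContentVersion cv = new ContentVersion(
    Title = 'Service Contract',
    PathOnClient = 'ServiceContract.pdf',
    VersionData = Blob.valueOf('file body goes here')
);
insert cv; // inserting a ContentVersion auto-creates its parent ContentDocument

// ContentDocumentId is populated by the system, so re-query it after insert
cv = [SELECT ContentDocumentId FROM ContentVersion WHERE Id = :cv.Id];

// Share the file with a record; 'V' grants viewer access
insert new ContentDocumentLink(
    ContentDocumentId = cv.ContentDocumentId,
    LinkedEntityId = accountId,
    ShareType = 'V'
);

Attempting to insert a second link with the same ContentDocumentId and LinkedEntityId fails with a duplicate-value error, which is exactly the constraint described above.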

What happens when a file is uploaded to the files related list under an object?

Salesforce creates the ContentDocument and ContentVersion records. Salesforce also creates the necessary ContentDocumentLink records: typically one for the record the file was uploaded to and one for the user who uploaded the file.

For each new file (not a new version of the same file) a new ContentDocument record will be created. You can trigger your automation based on this record being created, and then create additional ContentDocumentLink records to expand relationships and sharing.

Building Blocks of the Content Document Triggered Automation

For this use case I used a custom object named Staging Record which has dedicated fields for Contact and Account (both lookups). This method of uploading new documents and updating new field values to a custom record is often used when dealing with integrations and digital experience users. You can easily build a similar automation if a ContentDocumentLink for the Account needs to be created when the file is uploaded to a standard object like Contact.

Follow these steps to build your flow:

  1. Trigger your record-triggered flow when a ContentDocument record is created (no criteria)
  2. Add a scheduled path to your flow and set it to execute with a 0-minute delay. Under advanced settings, set the batch size to 1. An async path seems to work as well. I will explain the reason for this at the end of the post.
  3. Get all ContentDocumentLink records for the ContentDocument
  4. Check the get from the previous step for null (may not be necessary, but for good measure)
  5. If not null, use a collection filter to keep only the records where the LinkedEntityId starts with the prefix of your custom object record (I pasted the 3-character prefix into a constant and referenced it). Here is the formula I used: LEFT({!currentItem_Filter_Staging.LinkedEntityId}, 3) = {!ObjectPrefixConstant}
  6. Loop through the filtered records. There should be at most one. You have to loop because the Collection Filter element outputs a collection even when it holds a single record.
  7. Inside the loop, get the staging record. I know, it is a get inside a loop, but it will execute only once. You can add a counter and a decision to execute it only in the first iteration if you want.
  8. Build two ContentDocumentLink records using an assignment: one between the ContentDocument and the Contact on the staging record, the other between the ContentDocument and the Account. You could add additional records here for sharing.
  9. Add your ContentDocumentLink records to a collection.
  10. Exit the loop and create the ContentDocumentLink records in one shot using the collection you built (see the Apex sketch below for the same logic in code).
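
If you prefer reading the logic as code, here is a hedged Apex sketch of steps 3 through 10. Staging_Record__c, its field API names, and the 'a0X' prefix are hypothetical stand-ins for your own custom object, and docId is the Id of the triggering ContentDocument:

public with sharing class StagingFileLinker {
    public static void addStagingLinks(Id docId) {
        // Step 3: get all links for the uploaded document
        List<ContentDocumentLink> links = [
            SELECT LinkedEntityId
            FROM ContentDocumentLink
            WHERE ContentDocumentId = :docId
        ];

        // Steps 5-6: find the link whose LinkedEntityId carries the custom object prefix
        Id stagingId = null;
        for (ContentDocumentLink l : links) {
            if (String.valueOf(l.LinkedEntityId).startsWith('a0X')) { // hypothetical prefix
                stagingId = l.LinkedEntityId;
            }
        }
        if (stagingId == null) { return; }

        // Step 7: get the staging record and its lookups (hypothetical API names)
        Staging_Record__c staging = [
            SELECT Contact__c, Account__c
            FROM Staging_Record__c
            WHERE Id = :stagingId
        ];

        // Steps 8-10: build both links and insert them with a single DML call
        insert new List<ContentDocumentLink>{
            new ContentDocumentLink(ContentDocumentId = docId,
                LinkedEntityId = staging.Contact__c, ShareType = 'V'),
            new ContentDocumentLink(ContentDocumentId = docId,
                LinkedEntityId = staging.Account__c, ShareType = 'V')
        };
    }
}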

Here is a screenshot of the resulting flow.

Here is what happens when you create a staging record and upload a file to Salesforce using the related list under this record.

Here is the resulting view on the Contact and Account records.

Why is the Scheduled Path or Async Path Necessary?

When a file is uploaded, a ContentDocument record and a ContentVersion record are created. The ContentDocumentLink junction record must be created after these records exist, because the relationship is established by populating their IDs on the link record. When you build the automation on the immediate path, your get will not find the ContentDocumentLink record. To ensure the flow can find the record, use either an async path or a scheduled path.

When you build the automation on the immediate path, the ContentDocumentLink records are not created. You don’t receive a fault email, either, although the automation runs well in debug mode. I wanted to observe this behavior in detail, so I set up a user trace to log the steps involved. This is the message I found that stops the flow from executing:

(248995872)|FLOW_BULK_ELEMENT_NOT_SUPPORTED|FlowRecordLookup|Get_Contact_Document_Links|ContentDocumentLink

According to this, the Get step for ContentDocumentLink records cannot be bulkified, so the flow cannot execute. The flow engine always attempts to bulkify gets, and there is nothing fancy about the get criteria here; what must give us trouble is the unique nature of the ContentDocumentLink object.

The async path seems to bypass this issue. However, if you want to ensure this element is never executed in bulk, the better approach is to use a scheduled path with zero delay and set the batch size to one record in advanced settings. I have communicated this message to the product team.

Please note that the scheduled path takes a minute to execute in my preview org. Be patient and check back if you don’t initially see the new ContentDocumentLink records.

Conclusion

In the past, handling file uploads gave flow builders a lot of trouble, because the related objects did not support flow triggers.

Now that we have this functionality rolling out in the latest release, the opportunities are pretty much limitless. The functionality still has its quirks as you can see above.

I would recommend that you set up a custom metadata kill switch for this automation so that it can easily be turned off for bulk upload scenarios.

Watch the video on our YouTube channel.

[youtube youtube.com/watch?v=Gl0XCtMAhm]

Explore related content:

Top Spring ’26 Salesforce Flow Features

Should You Use Fault Paths in Salesforce Flows?

How to Use Custom Metadata Types in Flow

See the Spring ’26 Release Notes HERE.

#Automation #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Spring26 #UseCases
2025-12-16

Top Spring ’26 Salesforce Flow Features

What are the new features about? Spring ’26 brings new screen, usability, and platform enhancement features. Let’s dive into the details.

Top Screen Flow Spring ’26 Features

It seems like most of the new features involve screen flows.

I will not go into further detail, but this release introduces yet another file upload component for screen flows: LWR File Upload Component for Experience Cloud.

Here are the rest of the screen flow improvements.

Screen Flow Screen Element and Component Style Enhancements

The screen flow Screen element gets features that allow you to set the background, text, and border colors. Border weight and radius can be adjusted. For input components, the in-focus text color can be differentiated. Flow buttons get similar adjustments, gaining the ability to change colors on hover.

Any styling changes you set override your org or Experience Cloud site’s default theme.

Remember to keep your color and contrast choices in check for accessibility. Don’t do what I did below. Go to the WebAIM contrast checker website and plug in your color codes to verify that their contrast is sufficient for accessibility.

Screen Flow Message Element

The Screen Flow Message element leverages the new styling options to display a message on the screen. It has a pulldown that lets you create an information, success, warning, or error message. These come with standard color sets, which will guide flow developers toward a standard visual language.

This functionality is compliant with accessibility (a11y) standards.

See all four types on the same screen below.

Screen Flow Kanban Component (Beta)

The new Kanban component allows you to organize records into cards and columns. This is particularly useful for visualizing process phases and managing transitions across your workflow.

Use the new Kanban Board component to show records as cards in columns that represent workflow stages, without custom Lightning implementations. The Kanban Board is read-only, so users can’t drag cards between stages at run time.

Data Table Column Sort and Row Value Edit (TBD)

Now the user can sort the data table by columns and edit text fields in rows. This feature is not yet available in the preview orgs. The product team is working hard in the background to land this functionality in the Spring ’26 release at the last minute.

Preview Files Natively in Screen Flows

Elevate document-based processes by enabling your users to review file content directly within a screen flow. The new File Preview screen component removes the requirement to download files externally, ensuring easier document review and approval workflows.

This component seems to be already in production.

Open Screen Flows in Lightning Experience with a URL

Previously, when you opened a flow via URL, it did not launch in Lightning Experience. Now it does, preserving the experience your users are used to, especially when they are working in a customized Lightning console app.

I will quote the release notes for this one.

“To open a flow in Lightning Experience, append /lightning/flow/YourFlowNameHere to your URL. To run a specific flow version, append /lightning/flow/YourFlowNameHere/versionId to your URL. Flows that open in Lightning Experience have improved performance because most required Lightning components are already loaded into the browser session. In Lightning console apps, your tabs are preserved when a flow opens, and you can switch to other tabs while the flow is working. Using the new URL format also ensures that your browser behaves consistently, with forward, back, and your browser history working as expected.

To pass data into a flow through its URL, append ?flow__variableIdHere=value to the end of your URL. For example, to pass a case number into a flow, /lightning/flow/YourFlowNameHere?flow__variableIdHereID={!Case.CaseNumber}.

Use & to append multiple variables into a flow. For example, /lightning/flow/YourFlowNameHere?flow__varUserFirst={!$User.FirstName}&flow__varUserLast={!$User.LastName} passes both the user first name and last name into the flow.”

Usability and Platform Features

I listed all of the screen flow features above. The following two items are huge usability improvements that involve canvas management for all flows, not only screen flows.

Collapse and Expand Decision and Loop Elements

When your flow gets too big and you need to Marie Kondo (tidy up) your flow canvas, you can collapse the decision and loop elements that take up a lot of real estate. You can always expand them back when needed.

Now you can collapse and expand branching elements with Flow Builder, including Wait, Decision, Loop, Path Experiment, and Async Actions, helping you focus on the key parts of your flow.

This layout is saved automatically and locally in your browser, making it easier to return to your work without changing the view for other users.

Mouse, Trackpad and Keyboard Scroll

Now you don’t have to drag or use the scroll bar to move the flow around on the canvas. You can use the vertical and horizontal wheels on your mouse, the arrow keys on your keyboard, or your trackpad if you have one.

No need to use Salesforce Inspector Reloaded to get this functionality anymore. Thanks to Salesforce Inspector Reloaded for filling the gap in the meantime.

Content Document and Content Version Flow Triggers for Files and Attachments (Beta)

Salesforce delivered a new event type in the last release that could trigger flows for standard-object files and attachments, but the functionality was limited. In this release, Salesforce gives us the ability to trigger on all new files/attachments and their updates for all objects.

I was told by the product team that this functionality will be released as beta.

Flow Logging

I am not exactly sure what has been improved here. Salesforce had previously announced additional flow logging capabilities leveraging Data Cloud. Now a new Flow Logging tab has been added to the Automation Lightning App.

Debug Improvements

The Flow Builder debugger will now remember the record it ran on, and the updated field values if it runs in an update scenario. Debug inputs such as triggering record values, debug options, and input variable values now remain set when you save flow changes within your Flow Builder session. The user needs to click a reset button to disassociate the debugger from the inputs of the last run. This change is intended to make debug reruns faster.

Flow Builder preserves debug configurations when you save changes to your flow. Refreshing your browser or closing Flow Builder clears all debug settings.

Conclusion

Salesforce product teams work hard delivering new features for every release. The Spring ’26 release brings significant improvements to Flow Builder. I would have liked to see additional capabilities for flow types other than screen flows; this seems to be a lighter release in that area.

Additional bonus features include a Request for Approval component for Lightning pages (a highly requested feature), screen flow version comparison, and the ability to associate flow tests with flow versions.

The release notes are still in preview, and functionality could still be added or removed during the release cycle.

This post will be updated as additional details are made available.

[youtube youtube.com/watch?v=eZC_8W1IbU]

Explore related content:

Salesforce Optimizer Is Retired: Meet Org Check

One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!

Automate Permissions in Salesforce with User Access Policies

Spring ’26 Release Notes: Highlights for Admins and Developers

What Is Vibe Coding? And What’s New in Agentforce Vibes for Developers?

#Kanban #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #SalesforceUpdate #ScreenFlow #Spring26
2025-11-11

Should You Use Fault Paths in Salesforce Flows?

If you build enough Flows, you’ll eventually see the dreaded flow fault email. Maybe a record you tried to update was locked, a required field value was not set in a create operation, or a validation rule tripped your commit. Regardless of the root cause, the impact on your users is the same: confusion, broken trust, and a support ticket. The good news is you can catch your faults using the fault path functionality. In this post, we’ll walk through practical patterns for fault handling, show how and when to use the Custom Error element, and explain why a dedicated error screen in screen flows is worth the extra minute to build. We’ll also touch on the Roll Back Records element for screen flows, where this functionality can make a difference.

Why Fault Paths Matter

Faults are opportunities for your Salesforce org automation to improve. While unhandled faults are almost always trouble, handled faults do not have to be a huge pain in the neck.

The Core Building Blocks of Flow Fault Handling

1) Fault paths
Get elements (SOQL queries), DML elements (creates, updates, and deletes), and actions support fault paths. Fault paths give the developer a way to determine what to do in the event of an error.
2) Fault actions
You can add elements to your fault path to determine the next steps. You can also add a Custom Error element in record-triggered flows or error screens in screen flows for user interactivity. Multiple fault paths in the flow can be connected to the same element, executing the same logic. A subflow can be used to standardize and maintain the fault actions, such as temporarily logging fault events.

Logging Errors

Here is a list of data that may be important to include in your fault communications and logging:

  • Flow label
  • User Name
  • Date/Time
  • Technical details (e.g. $Flow.FaultMessage)
  • Record Id(s) and business context (e.g., Opportunity Id, Stage)
  • User-friendly message (plain English)

Subflow Solution

The advantage of a subflow when dealing with fault paths is that you can modify the logic once, in a central location. If you want to start logging temporarily, you can do that without modifying tons of flows. If you want to stop logging, that change can be completed fairly easily, as well. (An Apex alternative is sketched after the list below.)

Inside the subflow, decide whether to:

  • Log to a custom object (e.g., Flow_Error__c)
  • Notify admins via Email/Slack
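
If you would rather centralize this in code than in a subflow, a minimal invocable Apex action can do the same job. This sketch assumes a hypothetical Flow_Error__c object and field names; each fault path would pass in $Flow.FaultMessage and the record context:

public with sharing class FlowErrorLogger {
    public class FaultInput {
        @InvocableVariable(required=true) public String flowLabel;
        @InvocableVariable public String faultMessage; // pass $Flow.FaultMessage here
        @InvocableVariable public String recordId;     // business context
    }

    @InvocableMethod(label='Log Flow Fault')
    public static void log(List<FaultInput> inputs) {
        List<Flow_Error__c> logs = new List<Flow_Error__c>();
        for (FaultInput i : inputs) {
            logs.add(new Flow_Error__c(
                Flow_Label__c    = i.flowLabel,
                Fault_Message__c = i.faultMessage,
                Record_Id__c     = i.recordId,
                Failed_At__c     = System.now() // the failing user is implicit in CreatedById
            ));
        }
        insert logs;
    }
}

Calling one action from every fault path gives you the same single-place-to-change benefit as the subflow approach.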

Meet the Custom Error Element

The Custom Error element in Salesforce Flow is a powerful yet often underutilized tool that allows administrators and developers to implement robust error handling and create more user-friendly experiences. Unlike system-generated errors that can be cryptic or technical, the Custom Error element gives you complete control over when to halt flow execution and what message to display to your users.

The Custom Error element lets you intentionally raise a validation-style error from inside your flow, without causing a system fault, so you can keep users on the same screen, highlight what needs fixing, and block navigation until it’s resolved. Think of it as flow-native inline validation.

What The Custom Error Element Does

It displays a message at a specific location (the entire screen or a specific field) and stops the user from moving forward. One caveat: when the error is triggered by a picklist change made through the Path component, it surfaces as a less-than-ideal, self-disappearing red banner. Refrain from using custom error messages in those situations.

The unique thing about the custom error message is that it can be used to throw an intentional exception to stop the user from proceeding. In these use cases, it works very similarly to a validation rule on the object.

This becomes particularly valuable in complex business processes where you need to validate data against specific business rules that can’t be easily captured in standard validation rules. For instance, you might use a Custom Error to prevent a case from being closed if certain required child records haven’t been created, or to stop an approval process if budget thresholds are exceeded.

Please note that custom error messages block the transaction from executing, while a fault path connected to any other element allows the original (triggering) DML to complete even when the record-triggered automation fails.

Custom Error Screen in Screen Flows

Incorporating a dedicated custom error screen in your screen flows dramatically improves the user experience by transforming potentially frustrating dead-ends into helpful, actionable moments. When users encounter an error in a screen flow without a custom error screen, they’re often left with generic system messages that don’t explain what went wrong in business terms or what they should do next, leading to confusion, repeated help desk tickets, and abandoned processes.

A well-designed custom error screen, however, allows you to explain the specific issue in plain language that resonates with your users’ understanding of the business process. Beyond clear messaging, custom error screens give you the opportunity to provide contextual guidance, such as directing users to the right person or department for exceptions, offering alternative paths forward, or explaining the underlying business rule that triggered the error. You can also leverage display text components with dynamic merge fields to show users what caused the problem, turning the error into a learning moment rather than a roadblock. Additionally, custom error screens maintain your organization’s branding and tone of voice, can include helpful links to documentation or knowledge articles, and pair with logging actions to give you valuable insights into potential process improvements or additional training needs.

Here is an example custom error screen element format (customize to your liking):

Error
Your transaction has not been completed successfully. Everything has been rolled back. Please try again or contact your admin with the detailed information below.
Account Id: {!recordId}
Time and Date: {!$Flow.CurrentDateTime}
User: {!$User.Username}
System fault message:
{!$Flow.FaultMessage}
Flow Label: Account - XPR - Opportunity Task Error Screen Flow

The “Roll Back Records” Element

There are use cases in screen flows where you create a record and then update it based on follow-up screen actions. You could be creating related records for a newly created record, which requires you to create the parent record first to get its record Id. If you experience a fault in your screen flow, unusable record(s) can remain in your system. In these situations, the Roll Back Records element lets you undo database changes made earlier in the same transaction. Roll Back Records does not roll everything back to its original state; it only rolls back the current transaction in a series of transactions.

Tips for fewer faults in the first place

Here are some practical tips:

  • Validate early on screens with input rules (Required, min/max, regex).
  • Use Decisions to catch known conflicts before DML.
  • Place DMLs strategically in screen flows: Near the end so success is all-or-nothing (plus Roll Back Records if needed) or after each screen to record the progress without loss.

The fewer faults you surface, the more your users will trust your flows.

Putting it all together

Here’s a checklist you can apply to your next Screen Flow:

  • Every DML/Callout element has a Fault connector.
  • A reusable Fault Handler subflow logs & standardizes messages.
  • Custom Error is used for predictable, user-fixable issues on screens.
  • A custom error screen presents clear actions and preserves inputs.
  • Technical details are available, not imposed (display only if helpful).
  • Roll Back Records is used when it matters.
  • Prevention first: validate and decide before you write.

Other Considerations

When you use a fault path on a record-triggered flow create element and your create fails, keep in mind that you will get a partial commit. This means the records that fail won’t be created, while others may be created.

Example: You are creating three tasks in a case record-triggered flow. If one of your record field assignments writes a string longer than the text field’s max length (for example, Subject) and you use a fault path on that create element, one task fails while the other two create successfully.

Conclusion

My philosophy regarding fault paths is to add them to your flows, but never go down them if possible. When you see you are going down fault paths, that means you have an opportunity for improvement in your automation design.

Every fault you handle offers insight into how your flow behaves in the real world. Each one reveals something about the assumptions built into your automation, the data quality in your org, or the user experience you’ve designed. Treating faults as signals rather than setbacks helps you evolve your automations into resilient, reliable tools your users can trust. Over time, these lessons refine both your technical build patterns and your understanding of how people interact with automation inside Salesforce.

Explore related content:

How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows

Should You Leave Unused Input and Output Flow Variables?

How To Build Inline Editing for Screen Flow Data Tables in Salesforce

Salesforce Flow Best Practices

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

#CustomErrors #FaultHandling #FaultPath #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials #ScreenFlows
2025-10-21

Top 7 Key Takeaways from Salesforce Dreamforce 2025

Salesforce Break reviewed the press releases and sessions coming out of Salesforce Dreamforce 2025 and prepared the key takeaways in this post, so you don’t have to go through all the materials.

The biggest announcements for Salesforce at Dreamforce 2025 were centered on advancing the company’s foundational “Agentic Enterprise” vision through enhanced control, deeper context integration, and widespread collaboration tools.

The announced functionalities were more evolutionary than revolutionary.

Here is the list.

Top 7 Key Dreamforce Takeaways

1. Agentforce 360 Platform, New Agentforce Builder, and Agent Script

The cornerstone announcement was the launch of Agentforce 360, the latest version of the comprehensive platform designed to unify AI, trust, and data capabilities across all Salesforce products. Salesforce has completely reimagined the entire Customer 360 platform as Agentforce 360, ensuring that every app is now “agentic”. This platform emphasizes providing users with more control than ever over their AI systems. To make development accessible to a wider audience, including line-of-business leaders, IT teams, service, and sales teams, a brand new Agentforce Builder was introduced, featuring a radically simplified, clean, and beautiful interface built from the ground up.

This capability is powered by Agent Script, a new scripting language that exposes the reasoning engine and allows users to define deterministic chaining actions and conditional logic. Agent Script unlocks patterns needed for mission-critical use cases, blending fluid agentic reasoning with the certainty of rules-based control in a unified instruction set to ensure agents are predictable and stay “on track,” preventing costly unpredictability. Agent Script can be built at the topic level where previous LLM-based non-deterministic functionality produced unpredictable results.

In addition, Salesforce announced Slack as its future conversational interface. Several sessions demonstrated deeper integrations in action. Another major change of course was the ability to use external LLMs for the Atlas Reasoning Engine. I believe this demonstrates that Salesforce is positioning Agentforce more as an orchestrator and collaborator of agents and AI capabilities rather than competing to become the agent for the enterprise.

2. Agentforce Voice

Agentforce Voice extends the power of the Agentforce platform by allowing agents to talk, bringing AI capabilities directly to contact centers and 800 numbers. Businesses can now configure the voice, tone, and personality of the AI right inside Agentforce Builder. The goal is to deliver a unified customer experience across all channels, providing a highly human-like and interruptible conversational flow. A critical feature of Agentforce Voice is ensuring a seamless transition when the AI needs to transfer a customer to a human agent; the human representative automatically receives the full transcript and context of the AI conversation, allowing them to pick up the experience precisely where the AI left off. This functionality is generally available as of October ’25.

3. Intelligent Context Processing (Data 360)

Intelligent Context Processing tackles one of the greatest challenges for AI agents: understanding and utilizing vast amounts of complex, unstructured data. Agents often struggle with content in rich formats, pictures, tables, and existing workflows, the accumulated wisdom of the company. These new tools, built into Data 360, interpret and index this data by analyzing and parsing complex content (such as product manuals containing charts and images). This allows agents to pull in the exact, correct context required to deliver accurate and rich responses at the precise moment it is needed.

Furthermore, Data 360 enhances governance across both structured and unstructured data. Using natural language, administrators can create policies, such as masking internal FedEx employee contact details within agent responses, ensuring the information provided is not only accurate but also appropriate for the customer. It is not clear to us whether this is solely a rename of the product called Data Cloud. It seems that way.

4. Agentforce Vibes

Salesforce launched Agentforce Vibes as a new product that lets trailblazers quickly and easily build apps, dashboards, automations, and flows. Users achieve this via vibe coding, which involves providing a simple, natural language description of what they want the platform to build. The core innovation of Agentforce Vibes is its deep contextual understanding; it speaks “the language of the business,” including the organization’s data, relationships, customers, products, and security permissions. This contextual intelligence allows Agentforce Vibes to rapidly translate a descriptive idea into deployable, production-grade Salesforce metadata (such as a screen flow). This drastically reduces development time, saving what could amount to dozens of manual clicks inside a traditional flow builder. This effectively elevates the capabilities of every developer. Interesting tidbits: developers can develop using the coding language of their choice, and a local LWC preview function will launch soon.

5. Slackbot

Salesforce unveiled Slackbot as a new personalized, AI-powered companion that boosts productivity directly within Slack. It will launch for General Availability (GA) in January and draws on each user’s unique organizational context, including conversations, files, workflows, and apps. The tool moves users beyond asking simple questions toward achieving complex, tangible outcomes. For example, a user can ask Slackbot to handle a multi-step process with one command. It can review deal status, find compliance files, and draft a customer email in the user’s tone. Slackbot can also create a prep document and calendar invite for key stakeholders automatically. Slackbot will be the home of AI capabilities within Slack, even for customers who don’t use Salesforce.

6. Support for Third-Party Agents in Slack (AgentExchange)

Salesforce affirmed its vision of Slack becoming the “home for all AI” by announcing support for all third-party AI agents, such as ChatGPT, Claude, and Gemini. This transformation positions Slack as an agentic operating system where external agents can exist as collaborative “teammates” alongside human employees. To ensure these external agents can perform sophisticated reasoning, they are grounded in the company’s real-time knowledge and context via a real-time search API and an MCP server. This initiative allows Salesforce agents to work in conjunction with agents from other platforms. This, coupled with the AI-assisted enterprise search capabilities of Slack, empowers Slack admins and users to be more productive.

7. Agentforce Observability

Agentforce Observability was introduced to help monitor and scale digital work in the new agentic enterprise era. It serves as one control center for managers to monitor and improve agent team performance. The tool gives leaders visibility into KPIs like escalation and deflection rates using Tableau Next analytics.

Most importantly, it features Agent Insights, which acts as a performance review by scoring every single agent session. This scoring helps managers find and analyze poor-performing conversations to uncover root causes like process issues. It enables tuning of agent prompts and behaviors for consistent results. This management layer is essential since prompts and loops alone aren’t enough.

This was a major pain point with clients. I am happy Salesforce is addressing it with this new functionality, which will be available to most clients.

Conclusion

I personally found the announcements more evolutionary than revolutionary. It was not a strong Dreamforce in terms of new functionalities covered.

Adoption challenges still need to be addressed, and current products still need cleanup to be appealing. Even so, these announcements mark real progress for Salesforce.

[youtube youtube.com/watch?v=_S18LAXcBY]

Explore related content:

Salesforce Ushers in the Age of the Agentic Enterprise at Dreamforce 2025

Dreamforce 2025: Standout Sessions Streaming on Salesforce+

Salesforce Winter ’26 Release: Comprehensive Overview of New Flow Features

#AgentScript #Agentforce #Agentforce360 #AgentforceBuilder #Data360 #Dreamforce #NewRelease #Salesforce #SalesforceAdmins #SalesforceDevelopers
2025-10-02

How to Quickly Build a Salesforce-Native Satisfaction Survey Using SurveyVista

SurveyVista by Ardira is a Salesforce native survey solution that allows you to design, distribute, and analyze surveys directly within your Salesforce org. Unlike external survey tools that require complex integrations or third-party data syncs, SurveyVista keeps everything in-platform. This gives admins and business users a secure, streamlined way to capture feedback without leaving Salesforce.

🚨 Use case: Build a satisfaction survey to measure CSAT and NPS, accept free-form responses in addition to scores, and attach them to records in Salesforce for visibility, action and reporting purposes.

Why Salesforce-Native Matters

Many survey tools rely on connectors, middleware, or APIs to bring data back into Salesforce. While this approach works, it introduces several challenges. Data leaving Salesforce and traveling across external systems creates additional security risks. It also increases integration overhead by requiring ongoing maintenance, troubleshooting, and vendor updates. On top of that, responses may not be available in real time inside Salesforce, which can slow down reporting and automation.

SurveyVista avoids these issues because it is 100% Salesforce native. Data never leaves your org and remains protected under the Salesforce trust framework, giving you stronger security. Responses are available instantly, making them immediately usable for reporting, flows, and automation. Since no external integration is required, admin overhead is reduced and your tech stack stays simple.

SurveyVista Install and Preparation

SurveyVista is an AppExchange solution. You can head over to the AppExchange and install the free/trial version of SurveyVista in your Org. Get it HERE.

Once you install the AppExchange package, you can go to the app’s Lightning page and finish your configuration there. The required steps are fairly simple, and they relate to publishing a digital experience site where the surveys will be hosted. A few steps require you to copy and paste code into the Developer Console and execute it. You should also check in Experience Builder whether your digital experience site requires login. If you are going to host the survey publicly and accept anonymous responses, your digital experience site needs to be made public.

You will also find on this page an option to download templates and examples. I find the template that includes all UI components very useful, because it quickly shows you what is possible.

You can start your survey from scratch or from a template.

Build

I decided to build a 5-question CSAT and NPS form. One question will accept the NPS score, while the last question will accept free-form text for open feedback.

The form structure is as follows:

Customer Satisfaction Survey

Q1. How satisfied are you with your overall experience?
Very satisfied / Satisfied / Neutral / Dissatisfied / Very dissatisfied

Q2. How would you rate the quality of our product/service?
Excellent / Good / Fair / Poor

Q3. How likely are you to recommend us to a friend or colleague?
NPS scale (0-10)

Q4. How responsive have we been to your questions or concerns?
Extremely responsive / Very responsive / Somewhat responsive / Not so responsive / Not at all responsive

Q5. Please share any additional feedback or suggestions you may have.
Paragraph (free-form text)
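
For context on Q3: the standard NPS calculation buckets 9-10 as promoters, 7-8 as passives, and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A quick Apex sketch of the math, with made-up scores:

// scores would come from your stored survey response records
List<Integer> scores = new List<Integer>{10, 9, 8, 6, 10, 3, 9};
Integer promoters = 0, detractors = 0;
for (Integer s : scores) {
    if (s >= 9) promoters++;
    else if (s <= 6) detractors++;
}
// NPS ranges from -100 to +100; here: (4 - 2) / 7 * 100, roughly 28.6
Decimal nps = 100.0 * (promoters - detractors) / scores.size();
System.debug('NPS: ' + nps);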

SurveyVista offers ready components for you to add these inputs on your form. The customization options seem virtually limitless. Branding your survey is easy.

You can also customize your “Thank You” landing page and provide links on that page, as well.

Once you complete your design, you add the digital experience site to your survey and publish it. SurveyVista produces two links for your Survey form. One can be used for internal users, the other one for external users. You can send this link to your audience anytime on any channel, either manually or automatically.

Result

Here is the resulting form.

The beauty of SurveyVista is that each response is recorded in your Salesforce org as an object record. You can trigger automation when the record is created, and relate the record to any record(s) you want in your Salesforce org.

You can use the reports and dashboards SurveyVista package gives you, or set up your own reports and dashboards in Salesforce. In addition to relating to records, you can use response mapping features to automate creating and/or updating Salesforce standard or custom object records.

Overview of SurveyVista Features and Use Cases

SurveyVista includes a survey builder that lives entirely in Salesforce, allowing you to create surveys with customizable questions, logic, and branding. Responses are stored directly in Salesforce records through a native data model, eliminating the need for external syncs or integrations.
Because the tool is built on Salesforce, responses can trigger Flows, Approvals, or Processes automatically. You can also analyze results using standard Salesforce Reports and Dashboards, and distribute surveys securely through Salesforce email, Experience Cloud, or custom links.

One important note is that SurveyVista can handle both authenticated and unauthenticated respondents. If you want to collect responses from external participants who do not have a Salesforce login, you can do so through public or personalized links. For authenticated external respondents, such as community users who log in through a Salesforce Digital Experience site, additional Salesforce licensing may be required.

Use Cases:

  • Customer Satisfaction (CSAT) and NPS: Gather customer insights after key interactions.
  • Employee Feedback: Collect internal survey responses securely.
  • Training Assessments: Get immediate feedback from attendees.
  • Operational checklists: Inspection checklists guiding the inspector to complete a list of tasks.
  • Custom Business Processes: Build forms and capture input tied directly to Salesforce records.

Why Choose SurveyVista?

If your team values security, speed, and simplicity, SurveyVista gives you a native-first alternative to tools like Qualtrics or SurveyMonkey. Because everything lives in Salesforce, you avoid integration headaches and keep sensitive data where it belongs, under your org’s security umbrella.

SurveyVista keeps all survey responses inside Salesforce, giving you real time insights that combine feedback data with your existing customer CRM data, so you can take immediate action without waiting on integrations or external syncs.

SurveyVista Pricing: What It Costs and What You Get

SurveyVista is priced on an annual, org-wide basis, with plans starting at US $2,999 per year for smaller organizations. This gives you full access to a Salesforce-native survey solution without the overhead of integrating an external system.

There is also a Free Edition that includes core survey builder functionality. The free version comes with certain limitations, such as restrictions on how respondents access the survey, but it is a good way to explore the product and test it out inside your Salesforce environment.

Paid tiers scale up depending on your organization’s size and requirements. Larger organizations or those needing more advanced features can expect higher-tier plans in the range of US $5,499 per year or more. For enterprise needs, Ardira offers custom pricing tailored to the scope of your surveys and the scale of your Salesforce org.

SurveyVista also supports a free trial of its paid tiers, so you can evaluate the tool before committing. See more pricing details on their website HERE.

Conclusion

SurveyVista makes collecting and acting on feedback simple, secure, and Salesforce native. Whether you’re measuring customer satisfaction, running employee surveys, or embedding forms into business processes, everything stays inside your org, where it’s accessible in real time, protected by Salesforce security, and ready to power automation. With flexible pricing, a free edition to get started, and an intuitive builder that lives in Salesforce, SurveyVista is an accessible solution for any team that wants actionable insights without integration headaches. Try it today at the Ardira website to see how easily you can bring surveys into Salesforce!

This post was sponsored by SurveyVista by Ardira.

#Ardira #AppExchange #SalesforceTutorials #Salesforce #SalesforceAdmins #SalesforceDevelopers #SurveyVista

2025-09-09

One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!

What if I told you that the Flow you’ve been building could secretly hold the key to a much bigger impact in the automation world? A world where you don’t rebuild logic over and over… where one Flow powers multiple flows.

Sounds dramatic, right? But once you learn this trick, it will be an invaluable addition to your flow arsenal that will superpower your workflows going forward.

Use case: Create a follow-up task due in seven days for the proposal step when the stage is updated to proposal (do the same on create), if there is no existing open task already with the same subject.

Let’s start by building this use case. Then we will get to the hack part.

Step 1. Build the Original Record-Triggered Flow

We’ll start with something simple: a record-triggered Flow on Opportunity that creates a Task when the Opportunity hits a certain stage. Check whether there is an open task already with the same subject related to the opportunity, before creating another one. If there is an open task already, skip the create.

  • Trigger: Opportunity → when Stage = “Proposal/Quote”
  • Action: Create Task → Assigned to Opportunity Owner
  • Due date: 7 days from the current date
  • WhatId (Related to Id) set as the triggering Opportunity

Straightforward.
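
For reference, here is a hedged Apex sketch of the same dedupe-then-create logic the flow implements; the class name and subject string are illustrative:

public with sharing class ProposalTaskService {
    public static void createFollowUpTask(Id opportunityId, Id ownerId, Integer delayInDays) {
        // Skip the create if an open task with the same subject already exists
        Integer openTasks = [
            SELECT COUNT()
            FROM Task
            WHERE WhatId = :opportunityId
              AND Subject = 'Follow up on proposal'
              AND IsClosed = false
        ];
        if (openTasks == 0) {
            insert new Task(
                Subject      = 'Follow up on proposal',
                OwnerId      = ownerId,
                WhatId       = opportunityId,
                ActivityDate = Date.today().addDays(delayInDays) // due in 7 days when delayInDays = 7
            );
        }
    }
}

Keep the three inputs here in mind – they become the input variables of the subflow in Step 3.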

But here’s the catch: this logic lives in a record-triggered flow. What if I wanted to leverage the task creation logic in multiple record-triggered flows (including scheduled paths), in schedule-triggered flows, and possibly in screen flows as well? And could I leverage the same flow for records of other objects in addition to opportunities? Good food for thought.

Step 2. Save As an Autolaunched Flow

Here’s where the hack begins.

From the Flow Builder menu, click Save As → choose A New Flow → Autolaunched Flow (No Trigger).

Now we have the same logic, but free from the record trigger.

Step 3. Replace $Record With Input Variables

The Autolaunched Flow still references $Record from the Opportunity. That won’t work anymore. Time to swap those out. The references are listed under Errors, and the flow cannot be saved until these errors are fixed.

  • Create Input Variables for everything your logic needs; e.g., recordId (WhatId), OwnerUserIdVar, DelayInDaysVar.
  • Update your Create Task and Get Task elements and the Due Date formula to reference those input variables instead of $Record.

Boom. Your Flow is now a Subflow – it can take in data from anywhere and run its magic.

Step 4. Refactor the Original Record-Triggered Flow

Time to circle back to the original record-triggered Flow.

  • Open the Flow and Save As a New Version.
  • Delete all the elements. (Yes, all. Feels risky, but trust me.)
  • Add a Subflow element.
  • Select your new Autolaunched Flow.
  • Map the input variables to $Record fields, and provide the delay-in-days parameter value.

Now, instead of directly creating the Task, your record-triggered Flow just hands $Record data to the Subflow – which does the real work.

Here is how the debug run works.

Why This Hack Changes Everything

This one move unlocks a whole new way of thinking about Flows:

  • Reusability – Logic built once, used anywhere.
  • Maintainability – Update the Subflow, and every Flow that calls it stays consistent.
  • Scalability – Build a library of Subflows and assemble them like Lego pieces.
  • Testing Ease – Some flow types are hard to test. Your autolaunched subflow takes in all the necessary parameters in debug mode, and rolls back or commits the changes based on your preference.

Suddenly, your automation isn’t a patchwork of disconnected Flows – it’s a modular, scalable system.

The Secret’s Out

I call this the “Save As Subflow” hack. It’s hiding in plain sight, but most builders never use it. Once you do, your workflow will never be the same.

Remember, you can make your subflow logic as flexible as you want. You can add input variables for subject and description. This would make your task creation even more flexible so that it can be used for other objects like Case and custom objects.

Try it today – and the next time you find yourself rebuilding logic, remember: you don’t have to. Just save it, strip $Record, add input variables, and let your Subflows do the heavy lifting.

Explore related content:

Automate Permissions in Salesforce with User Access Policies

When Your DMLs Have Criteria Conditions Other Than Id

Display Product and Price Book Entry Fields in the Same Flow Data Table

How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows

#Hack #HowTo #RecordTriggered #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Subflow

    2025-08-19

    Should You Leave Unused Input and Output Flow Variables?

    In Salesforce Flow, input variables are special placeholders that allow data to be passed into a flow from an external source, such as a Lightning page, a button, another flow, or even an Apex class, so that the flow can use that data during its execution. When you create an input variable in Flow Builder, you mark it as Available for Input, which makes it visible and ready to receive values from outside the flow. Output variables, on the other hand, are used to send data out of a flow so it can be consumed by whatever triggered or called the flow, such as another flow, a Lightning web component, or an Apex class. When you create a variable and mark it as Available for Output, the flow can pass its final or intermediate values back to the caller once it finishes running.

    Input variables are especially useful for building modular, reusable flows. You can design them to handle different scenarios based on the values provided at runtime. For example, a record ID provided as an input variable can help the flow retrieve and update that specific record without needing user input. By leveraging input variables, you can keep flows flexible, reduce duplication, and make them easier to maintain.

    Similarly, output variables are powerful when building modular, subflow-based solutions. The parent flow can feed inputs to the subflow, receive outputs in return, and then continue processing without extra queries or logic. For example, a subflow might calculate a discount amount or generate a new record ID. It can then return it as an output variable for the parent flow to use. Output variables make flows more reusable, keep processes streamlined, and allow different automation components to share data seamlessly.
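When the caller is Apex rather than a parent flow, those outputs can be read directly from the finished interview. A minimal sketch, assuming a hypothetical autolaunched flow named Calculate_Discount with an input AmountVar and an output DiscountAmountVar:

// Hypothetical flow and variable names, for illustration only.
Map<String, Object> inputs = new Map<String, Object>{ 'AmountVar' => 1200 };
Flow.Interview.Calculate_Discount calc = new Flow.Interview.Calculate_Discount(inputs);
calc.start();
// Variables marked Available for Output can be read once the flow finishes
Decimal discount = (Decimal) calc.getVariableValue('DiscountAmountVar');
System.debug('Discount: ' + discount);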

    Security Implications of Variables Available for Input and Output

    In programming, a variable’s scope defines the region of code where it exists and can be used, such as within a specific method, a class, or an entire module. For example, a variable defined inside a method is local to that method and cannot be seen or changed by code outside it, much like keeping notes in your own locked desk drawer. This “privacy” ensures that internal details remain protected from unintended interference, which is a key aspect of encapsulation in programming. If you want other parts of the program to access the data, you must explicitly expose it through return values, public properties, parameters, or other controlled interfaces. This principle not only prevents accidental bugs but also supports security. Sensitive data and logic remain inaccessible unless intentionally shared, helping keep the system stable, predictable, and easier to maintain.
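The same idea takes only a few lines of Apex: the field is private to the class, and callers receive data only through the interface the class intentionally exposes.

// Encapsulation in miniature: the rate stays in the class's locked drawer.
public class DiscountPolicy {
    private Decimal rate = 0.15;            // invisible to outside code

    // The one controlled interface that exposes a computed result
    public Decimal applyTo(Decimal amount) {
        return amount * (1 - rate);
    }
}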

When you allow input variables for your flow, you let external environments that run the flow pass parameters into it, which potentially exposes your flow to outside attacks. When you configure output variables, you create the risk of external environments accessing the flow’s output data. This is often data recorded in your Salesforce org, and it may include personally identifiable or otherwise sensitive information.

    In addition, avoid using inputs that are easy to guess. If you look up a contact record based on their email address, attackers may guess the email address after a few tries (firstname.lastname@gmail.com for example).

    What About Flows Built for Digital Experience Guest Users?

    When you build a flow and deploy it on a digital experience site, where the guest user can execute it without logging in, you are exposing your flow to the outside world. This scenario makes your flow even more vulnerable to outside attacks.

    Guest User Means Anybody Can Access Any Time

First of all, please know that this is a very risky approach. You should assume anybody can run that flow at any time, because that is what you allowed. Make sure that only limited inputs and outputs are defined and used. The flow should execute only the limited scope it absolutely needs; do not allow it to perform a multitude of operations in the name of flexibility. Test many scenarios to ensure attacks cannot derail your flow and trick it into performing operations it is not intended to perform.

    Limit the Data

Furthermore, you should not allow the flow to access any information it does not need to see. If you are dealing with records or record collections, make sure your Get elements specify only the fields that are absolutely necessary. Do not get the driver’s license number for the contact when you just need the name. In this scenario, do not let Salesforce automatically decide which fields to get. Also, when performing updates, do not update all the field values on the record; update only whichever field is important for your process.
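The same principle in SOQL terms, in case it helps to see it as a query; the sensitive custom field mentioned in the comment (Drivers_License__c) is hypothetical.

public with sharing class GuestContactLookup {
    // Retrieve only the field the guest-facing process actually uses.
    // A hypothetical sensitive field like Drivers_License__c never enters the query.
    public static String getContactName(Id recordId) {
        Contact c = [SELECT Name FROM Contact WHERE Id = :recordId LIMIT 1];
        return c.Name;
    }
}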

    Isolate the Elevated Functionality

    Finally, you may be tempted to set your flow to run in system context without sharing, or to allow a guest user to view records in the org through sharing rules. Both scenarios introduce additional risks that must be carefully considered.

    When allowing your automation to run in system context without sharing, isolate the necessary part into a subflow. Ensure that logic is tightened well from a security standpoint. Do not run the whole flow in system context without sharing mode. Just run the necessary part in a subflow using this elevated setting.

    Screen Flows and Reactivity

Whether you allow elevated access or not, screen flows present a couple of inherent risks.

When you pass information to a data table, Lightning web component, or screen action, that information is accessed locally by your browser. If you feed a collection of contact records to a data table and get all field values before reaching the data table screen, the browser sees every field value on those records before the user even interacts with the table, and the user can view those values.

Recent reactivity developments for screen flows are fantastic from a UI standpoint, but they further compound the security risks: the more reactive functionality you use in your flow, the more data you handle locally in your browser.

    Conclusion

When flow builders, especially new starters, create flow variables, they often freely check the Available for Input and Available for Output checkboxes, thinking the alternative would limit them. This is risky and unnecessary: you can change these settings at any time without having to recreate the variables.

    Always plan your inputs and outputs carefully and review them at the end of development. Make sure you don’t have any unused variables still accepting inputs or producing outputs.

In this era, when we hear the Salesforce name associated with client data security breach incidents, apply extra security caution when dealing with automation.

    This post is part of our Flow Best Practice series. See the other posts HERE.

    Sources and references:

    Building Secure Screen Flows For External User Access by Adam White

    Data Safety When Running Screen and Autolaunched Flows in System Context – Salesforce Help

    Explore related content:

    How To Attach Files Using the Flow Email Action in Salesforce

    Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights

    How To Build Flex and Field Generation Prompt Templates in the Prompt Builder

    #Apex #BestPractices #InputVariables #LowCode #OutputVariables #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #ScreenFlow #Security

    2025-08-12

    Why Is Everyone Talking About Salesforce Flow Approvals?

In the Spring ’25 release, Salesforce introduced Flow Approvals to replace the legacy approval processes. This approval platform is built on the orchestration functionality. I recorded and released two videos and posts to share this functionality on Salesforce Break. The videos drew great interest from the community and are about to reach 20K views. So, why is everyone talking about flow approvals?

    There are multiple reasons:

1. Flow approvals are orchestration-based, but they are entirely free, unlike other orchestrations.
2. Legacy approvals are really old. Salesforce has not been investing in them, and they are past due for a remake.
3. Legacy approvals are limited. To enhance the functionality, clients had to use AppExchange solutions or paid Salesforce alternatives like Advanced Approvals for CPQ.
4. Flow approvals allow for parallel approvals, dynamic steps, and flexibility in the approval process.

    This is why I decided to create more content in this area, starting with:

    1. A live course that teaches Flow Approval processes in depth, with hands-on practice. See the details here, and reach out if you’re interested.
    2. Additional resources focused on solutions that bridge the gaps between Flow Approvals and Legacy Approvals, addressing the limitations of the new platform.

    Here is the first post detailing a solution filling one of the gaps.

    Flow Approvals Don’t Provide Sufficient Detail In The Related Lists

Here is the first point I would like to address: flow approvals don’t provide the detailed information in the object record’s related lists that the legacy approvals did.

    Solution: Build a screen flow with reactive data tables to show the approval submission records and their related records. Add the screen flow to a tab on the record page.

Salesforce provides a component that can be added to the record page, called the Approval Trace component. It provides some information about the approval process but is not customizable. I asked myself how I could go beyond that, and decided to build a reactive screen flow with data tables to fill this functionality gap. Here is what the output looks like:

    To build and deploy this flow, you need to follow these steps:

    1. Build the screen flow.
    2. Build the autolaunched flow that will fetch the data you will need. This flow will be used as the screen action in step one.
    3. After testing and activation, add the screen flow to the record page.

If you have never built a screen flow with screen actions before, let me be the first to tell you that steps one and two are not really completed in sequence. You go back and forth building these two flows.

    Let’s get started.

    Build the Flow Approval Submission Screen Flow

What I usually do when building these flows is get the screen flow started first. Then I build the autolaunched flow and go back to the screen flow to build out the rest of the functionality. The reason is that the screen flow data tables need the outputs from the autolaunched flow to be fully configured.

    This is what the screen flow looks like, once it is completed.

For now, you can just ignore the loop section. It is there to ensure that there is a default selection for the first data table when the flow first runs.

    This is the structure of the flow excluding that part:

    1. Get all approval submission records for the recordId that will be provided as input into the flow.
    2. Check if there are approval submissions found.
3. Display a screen saying “no records were found” if the get returns null.
    4. Display a reactive screen mainly consisting of three data tables with conditional visibility calling an autolaunched flow as a screen action.

    Here is what this screen looks like:

    After you build, test, and activate the autolaunched flow, configure the screen action under the screen properties as shown below.

    How the Loop Section Works

The first data table has an input parameter that determines the default selection when the flow first runs. This parameter is a record variable representing one member of the collection record variable that supplies the data. You need to loop through the collection of records to get to that record variable. Follow these steps:

1. Loop through the collection record variable, which is the output of your get step. Sort the data by last modified date in the get step.
2. Assign the first member to a record variable.
3. Exit the loop without a condition. Connect the path to the next element outside the loop.
4. Add the resulting record variable to the default selection parameter under the Configure Rows section of your data table.

    This loop always runs once, setting the default selection to the most recent approval submission. This populates the related data tables when the flow first runs.

    Build the Screen Action Autolaunched Flow for Related Lists

    The autolaunched flow receives a single approval submission recordId as input. Then it gets the related records and data the screen flow needs, and returns the data as output.

    Here is a screenshot of the autolaunched flow.

    This flow executes the following steps:

    1. Gets the approval submission data.
    2. Gets the user data for the submitter to resolve the full name.
    3. Gets approval work items.
4. Checks for null and sets a boolean (checkbox) variable when the get returns null. The output uses this variable to control conditional visibility of the relevant data table. I found this method yields the best results.
5. Gets approval submission details.
6. Checks for null and sets a boolean variable when the get returns null. This variable is then used in the output to drive conditional visibility of the relevant data table.
    7. Assigns the get results to output collection record variables.

    Final Deployment Steps

After testing and activating the autolaunched flow, add it to the screen flow as the screen action. The flow input will be fed from the selection in the first data table. You will see that this step makes all the outputs of the autolaunched flow available to the screen flow. Using these outputs, build the two additional data tables and configure the conditional visibility.

    After testing and activating your screen flow, add the flow to the record page on a dedicated new tab (or to a section on an existing tab). Select the checkbox to pass the recordId to the flow. Note that this flow will work with any record for any object.

    Limitations and Suggested Improvements

While this screen flow provides a lot of detail and customization options, it has two limitations:

    1. By default, the data table does not resolve and display record names in lookup fields when you add these fields as columns. To address this, I added the submitter’s full name in a read-only text field for display on the screen. Workaround: Create formula fields on the object and display those in the data table.
2. The data tables do not provide a clickable link. Combined with the limitation above, you can create a formula field on the object to address both of these gaps: show the record name and make it a clickable link. Here is the formula example you need for this (shout-out goes to Brad Weller for his contribution): HYPERLINK("/" & Id, Name, "_self")

    While I wanted to make these additions to these flows, I did not want to add custom fields to the objects. It should be your decision whether you want to do that or not.

    Install the Package to Your Dev Org

Here is the second-generation unprotected package for these two flows that you can install in your Dev Org:

    Install the Unprotected 2GP

For a more visual walkthrough of how these flows are built, watch the Salesforce Break YouTube video below.

    With Salesforce phasing out legacy approvals, mastering Flow Approvals is essential to keep your org’s processes modern, flexible, and future-ready. Gain the confidence to handle any approval challenge with solutions that work seamlessly in real-world Salesforce environments HERE.

    Explore related content:

    Supercharge Your Approvals with Salesforce Flow Approval Processes

    When Your DMLs Have Criteria Conditions Other Than Id

    Start Autolaunched Flow Approvals From A Button

    Get Ready for the New Time Data Type – Summer ‘25 Flow Goodness

    #AutolaunchedFlow #FlowApprovals #FlowBuilder #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

    2025-08-04

    Simplify Salesforce Integrations with Declarative Webhooks

Salesforce continues to invest in tools that simplify integration tasks for administrators. Low-code setup for integrations is possible on the Salesforce platform today; however, the functionality still seems dispersed across several tools, which keeps the difficulty of setup relatively high. This is where Declarative Webhooks comes in. This platform makes inbound and outbound integrations easy and keeps all your configurations together in one single app.

    What is Declarative Webhooks?

Declarative Webhooks is a native Salesforce application developed by Omnitoria. It allows admins to build robust, scalable integrations with external platforms without writing code. Built to work with REST APIs that use JSON or x-www-form-urlencoded data, the app makes it possible to configure both outbound and inbound connections from within Salesforce. It’s ideal for admins, developers, and operations teams looking to connect Salesforce to third-party tools quickly and securely.

    Declarative Webhooks currently holds a 5-star rating on the AppExchange, with positive feedback from users across industries.

    Key Declarative Webhooks Features

    Declarative Webhooks enables bidirectional integrations. You can send data out of Salesforce (outbound) by triggering callouts through Flow, Process Builder, Apex, custom buttons, or scheduled batches. You can also receive data from external systems (inbound) by defining endpoints within Salesforce that respond to external webhooks.

    Unlike standard Salesforce tools, Declarative Webhooks actually creates and hosts inbound URLs—eliminating the need for middleware, and enabling real-time sync with external systems directly from your org.

    The interface is entirely point-and-click, making setup approachable even for non-developers. The app includes template-based configurations that streamline implementation without the need for custom Apex. Help and guidance is provided throughout the UI each step of the way.

    Security and flexibility are top priorities. Declarative Webhooks supports a variety of authentication methods, including OAuth and Basic Authentication, and allows you to configure secure handling of credentials and external tokens.

    For more advanced use cases, the app includes features like retry logic, callout sequences, and detailed error handling. You can tailor integrations to your needs using scheduling tools or triggering logic from inside Salesforce.

    Real-World Use Cases

    Slack Webhook

    Simple use case: Trigger Slack actions via Slack workflows from Salesforce – Send a message to a channel and add a name to a Slack list.

Now granted, this can also be achieved with Salesforce-Slack actions; however, I wanted to take this opportunity to trigger Slack workflows with webhooks and demo the Declarative Webhooks functionality with a simple use case.

I set up a Slack workflow that triggers on a webhook. This workflow posts a message to a channel and adds the name of the person passed in via the webhook to a list of contacts.

    You can see the configuration of the Slack workflow and the Slack output results below.
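For context, this is roughly the raw callout that Declarative Webhooks builds for you declaratively. A minimal Apex sketch follows; the webhook URL and the name key are assumptions, since Slack generates the real URL and the workflow defines its own variables.

HttpRequest req = new HttpRequest();
req.setEndpoint('https://hooks.slack.com/triggers/EXAMPLE'); // hypothetical URL provided by Slack
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
// 'name' stands in for whatever variable the Slack workflow expects via the webhook
req.setBody(JSON.serialize(new Map<String, String>{ 'name' => 'Jane Doe' }));
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode());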

    How Did I Configure Declarative Webhooks to Achieve This Result?

First, you need to install Declarative Webhooks from the Salesforce AppExchange; I will give you the link further down in this post. This app is free to install and try.

    • Complete the Slack configuration of the workflow. Slack will give you a webhook URL.
    • Configure Declarative Webhooks and add the URL to the configuration page. Make sure you add the domain URL to Salesforce Remote Site Settings.

• Test and activate your callout.
• Use one of the many methods available in Declarative Webhooks to trigger the callout from Salesforce.

    Inbound Call Template for Zoho Campaigns

    Use case: Zoho Campaigns can generate a webhook callout when a contact unsubscribes from an email list. When a contact unsubscribes from the list, make a callout to Salesforce and activate the Do Not Call checkbox on the contact.

    How Did I Configure Declarative Webhooks and Zoho Campaigns to Achieve This Outcome?

• Set up an Inbound Call Template in Declarative Webhooks. The magic of this platform is that it generates an external endpoint URL for you. You can choose whether or not to authenticate.

    • Create a webhook on the Zoho Campaign side and pass the Name and Email of the contact to Salesforce. Enter the URL generated by Declarative Webhooks here.

• Build an autolaunched flow to update the checkbox on the matching record (sketched in Apex after this list).

    • Test and activate your flow and Declarative Webhooks template.
    • Unsubscribe the contact from the list on the Zoho Campaigns side and see the magic unfold.
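For reference, here is what the autolaunched flow’s matching logic looks like expressed as Apex; the exact matching and field choices below are assumptions for illustration.

public with sharing class UnsubscribeHandler {
    // Find the contact by the email passed in from the webhook
    // and activate Do Not Call (a sketch of the flow's logic).
    public static void markDoNotCall(String email) {
        List<Contact> matches = [SELECT Id FROM Contact WHERE Email = :email LIMIT 1];
        if (!matches.isEmpty()) {
            matches[0].DoNotCall = true;
            update matches;
        }
    }
}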

I really liked this functionality. The logs show whether the flow executed successfully. As a future enhancement, I would like the Declarative Webhooks logs to also show the output variable values coming from the flow.

    Pricing Overview

    Declarative Webhooks is free to install and use in Salesforce sandbox environments indefinitely. In a production or developer org, users get a 30-day free trial. After that, the app remains free for basic use, up to 100 inbound and 100 outbound calls per month, using one outbound and one inbound template.

    For organizations that need more capacity or advanced functionality, paid plans are available. These plans scale with usage and support additional templates, retries, and enhanced features. Nonprofit discounts are available, making the app accessible to mission-driven organizations.

    Follow this link to find out more about the product and try it yourself.

    Why Declarative Webhooks?

    This app removes the need for manual data entry and reduces the likelihood of human error. It lets teams centralize their business operations within Salesforce, replacing disconnected workflows with streamlined automations. Whether you’re connecting to popular SaaS tools or custom-built systems, Declarative Webhooks empowers teams of all skill levels to build reliable integrations that scale with their business.

    How to Get Started

    You can install Declarative Webhooks directly from the AppExchange. The installation process is quick, and the setup guide walks you through each step. Start experimenting in a sandbox or production trial, and configure your first outbound or inbound connection using the built-in templates. Whether you’re an admin looking to eliminate duplicate entries or a developer needing a fast integration framework, this tool provides the support you need to get started quickly. 

    Final Thoughts

    I liked how Declarative Webhooks brought various integration methods together in one app. I especially like the inbound call functionality. Ease of setup, flexible pricing, and native integration with Salesforce automation tools are attractive features for Salesforce Admins. If you are in the market for integration solutions, I recommend you check out Declarative Webhooks by Omnitoria here.

    This post was sponsored by Omnitoria.

    Explore related content:

    Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights

    How To Use Custom Permissions In Salesforce Flow

    Create Document Zip Archives in Salesforce Flow

    Dynamically Create Documents Using PDF Butler

    #DeclarativeWebhooks #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

    2025-07-30

    When Your DMLs Have Criteria Conditions Other Than Id

    The Update Records element in Salesforce Flow is a powerful tool that allows you to modify existing records without writing any code. It’s commonly used to change field values and update statuses. You can configure it to update a specific record (like a record from the trigger or a record you’ve retrieved in a prior element), or you can set conditions to update multiple records that meet certain criteria. Best practice is to keep your updates efficient. Limit the number of records updated when possible, and always ensure that your flow logic avoids unnecessary updates to prevent hitting governor limits or creating infinite loops. Use it thoughtfully to streamline processes and maintain clean, accurate data.

    Update Records

    When you update records, there are three ways you can configure the update element:

    1. Update using Id(s): Your update element can point to one record Id or multiple record Ids using the IN operator when executing the update. This is an efficient alternative, as the record(s) are uniquely identified. This alternative consumes one DML against your governor limit.
2. Update using a collection: This method is efficient because the update element always consumes one DML against your governor limit, regardless of how many records you are updating in one shot. You can update up to 10K records in one update element.
3. Update using criteria conditions for field values other than Id: When updating multiple records, we can also set conditions and update all the records that meet them. In this case, Salesforce queries the database to get the records that will be updated, and then performs the update. This method therefore consumes one SOQL and one DML against your governor limits. Keep in mind that only one record, or none at all, may meet the conditions.

    Update Using Criteria Conditions For Field Values Other Than Id

    Let’s expand on the last method. For an inactive account, you may want to update all open cases to closed status. In a flow we could configure the update element with the following conditions:

    • AccountId = Inactive Account
    • Closed = false (case status is not closed)

And for the records that meet these conditions, the field update that will be performed is as follows:

    Status = Closed (set status to closed)

    In this scenario, what Salesforce will do is query and find the records using the two conditions listed above (SOQL) and set the Status field on these records to Closed (DML).
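Expressed as Apex, the work Salesforce performs looks like this sketch: one query to find the matches, one DML to write them back, no matter how many cases qualify.

public with sharing class CaseCloser {
    public static void closeOpenCases(Id inactiveAccountId) {
        // One SOQL: find the records matching the criteria conditions
        List<Case> openCases = [
            SELECT Id FROM Case
            WHERE AccountId = :inactiveAccountId AND IsClosed = false
        ];
        for (Case c : openCases) {
            c.Status = 'Closed';
        }
        // One DML: update every matching record in a single operation
        update openCases;
    }
}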

Now, is this a bad thing? Not necessarily. This is a little-known fact that you should keep in mind when optimizing your flow for governor limit usage.

    What is the alternative? I guess you could perform an update using one of the other alternatives listed above. Let’s look at these alternatives in detail:

    Update Using Id(s)

If you wanted to use this method, you could get the records according to the criteria conditions, extract the Ids into a text collection using the transform element, and do the update using the IN operator. This alternative is more complicated, and it does not bring any efficiencies: the get still consumes one SOQL and the update one DML.

    Update Using a Collection

You could get a collection of records using the conditions, loop through each item to update the case status (or possibly use the transform element to update the status in one shot, depending on your use case), and then update using the processed collection. Too complicated, and this alternative still uses one SOQL and one DML.

    Conclusion

Updates that include conditions beyond the Id of the record consume one SOQL and one DML against your execution governor limits; make sure you check and control your governor limit usage.

    Explore related content:

    Salesforce Flow Best Practices

    Flow Naming Convention Tips

    Can You Start With a Decision Inside Your Record-Triggered Flow?

    How Many Flows Per Object?

    #Automation #DML #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #UpdateElement

    2025-07-16

    Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights

    It’s not uncommon for businesses to lose track of their customers when data lives in too many places. Data scattered across various systems, from CRM and marketing automation to e-commerce platforms and mobile apps, creates “data silos” that hinder a complete understanding of customer behavior and preferences. This leads to misleading metrics, redundant communications, and missed opportunities for truly personalized engagement. This is where Salesforce Data Cloud steps in, offering a solution to connect, harmonize, and activate all your customer data, transforming it into actionable insights.

Evolving from Salesforce CDP (Customer Data Platform) and formerly known as Genie, Salesforce Data Cloud is designed to create a unified picture of your customer. It enables you to bring together data from any source, regardless of its format, using low-code tools and advanced architectural foundations like the lakehouse architecture and Hyperforce. The ultimate goal is not just data aggregation, but also to empower every part of your organization, from marketing and sales to service and commerce, with real-time, intelligent actions.

    This guide will walk you through the essential phases of getting started with Salesforce Data Cloud.

    Why Data Cloud? The Core Problem It Solves

The primary challenge Salesforce Data Cloud addresses is the elimination of data silos. Imagine a customer interacting with your brand through multiple touchpoints: they browse your website, sign up for a newsletter, make a purchase through your e-commerce platform, and contact customer service. Each interaction generates data, but this data often resides in separate systems, each managed by different teams or individuals. Without a unified view, you might send generic emails, offer irrelevant products, or even annoy customers with redundant communications because you don’t recognize them as the same individual across all these systems.

    Data Cloud provides a unified picture by ingesting data from diverse sources, including Salesforce CRM, Marketing Cloud, Commerce Cloud, Amazon S3, Google Cloud Storage, Azure, Workday, and SAP, using a rich library of pre-built connectors or flexible APIs. This consolidation is crucial for building unified customer profiles that represent a complete, 360-degree view of each individual, avoiding misleading metrics and improving personalization.

    Beyond just collection, Data Cloud is built to make data actionable. It enables you to perform transformations and aggregations to generate calculated insights (e.g., Customer Lifetime Value, engagement scores), segment your audience with precision, and trigger real-time actions across various channels. Its architecture, based on a lakehouse model on Hyperforce, supports high-volume data ingestion and processing at the metadata level, ensuring efficiency and scalability.

It’s also important to note Data Cloud’s consumption-based pricing model, where you pay only for the services you use, making efficient data management even more critical. Despite the improvements made over recent years, estimating Data Cloud costs remains a challenge.

    Phase 1: Planning and Discovery – Laying the Groundwork

Any successful Data Cloud implementation begins with a meticulous planning and discovery phase. This foundational step ensures alignment with business goals and prepares the ground for effective data management. Data Cloud is a platform where most of the implementation time needs to be spent on preparation and design; rushing these phases can be costly, causing rework and frustration.

    Define Business Objectives and Use Cases

    Before diving into technicalities, ask fundamental questions:

    • Why are you starting a data platform solution?
    • What is the vision for this Data Cloud solution?
    • What are your primary use cases, and are they aligned with top business priorities?
    • How will you measure the success of the implementation?

    For optimal results, start small. Focus on one or two core use cases initially. This iterative approach allows you to:

    • Identify platform nuances.
    • Understand source systems and their data quality.
    • Develop robust data dictionaries.
    • Monitor use cases, then expand.

    Ultimately, you should catalog the available data and build a prioritized list of use cases based on their tangible business value.

    Understanding Roles and Ownership

    A Data Cloud implementation necessitates a strong partnership between IT and marketing/business teams. Clearly define who owns what:

    • CDP Administrator/Platform Owner: Manages the Data Cloud platform.
    • Data Roles: Responsible for creating data pipelines.
    • Marketing Roles: Focus on audience creation, campaign execution, and strategy.
    • Customer Insights and Analytics Teams: Leverage the unified data for reporting and analysis.

    Align these roles with your organization’s existing structure to ensure all necessary stakeholders are involved from the outset.

    Data Inventory and Quality

    This is arguably the most critical aspect of planning. Prepare a thorough data dictionary or inventory that comprehensively lists all data sources, preferred ingestion methods, necessary transformations, and how they relate to your defined use cases.

    • Field-Level Data Inspection: Scrutinize individual fields for accuracy, identify primary keys, and assess whether data needs normalization or denormalization.
    • Data Profiling Tools: These are invaluable for understanding your data. They can analyze field distribution, completion rates, and help identify relevant fields. Profiling helps confirm if your approach will stay within free credit limits and accelerates the design phase.
    • Clean Data Upstream: It cannot be stressed enough: clean and sanitize your data at the source system before ingestion. Data Cloud is a unification tool, not primarily a data cleansing or deduplication tool. Ingesting bad or unnecessary data can significantly increase credit consumption and lead to inaccurate results.
    • Prioritize Data: Avoid the common pitfall of trying to bring in “all the data”.
    • Data Type Alignment: For Zero-Copy integrations, ensuring data type alignment between your source schema (e.g., Snowflake) and Data Cloud’s data model objects (DMOs) is crucial to prevent mapping issues.
    • Unique Keys: Data Cloud operates on an upsert (update or insert) model. Ensure every row in your data files has a unique key (either a single field or a composite key) to prevent incorrect merging of records during ingestion.

    Phase 2: Architecture and Setup – Building the Foundation

    Once the planning is complete, the next phase involves architecting and setting up Data Cloud to receive and process your data.

    Connector Selection and Data Ingestion

    Salesforce Data Cloud offers flexible ways to ingest data:

    • Out-of-the-Box (OOTB) Connectors:
      • Prioritize using OOTB connectors for Salesforce CRM, Marketing Cloud, Commerce Cloud, Amazon S3, Google Cloud Storage, and Azure. These are pre-built and minimize effort.
    • Ingestion API (Batch vs. Streaming):
      • Batch Ingestion: Ideal for front-loading historical data or ingesting large volumes at scheduled, off-peak hours. Data is typically sent in CSV format.
      • Streaming Ingestion: Designed for near real-time ingestion of small batches of data, such as user actions on websites or POS system events. Data is typically sent in JSON format.
  • Setup Process: First, create an Ingestion API connector, which defines the expected schema and data format. Then, create a data stream for each object you intend to ingest through that connector. (A minimal request sketch follows this list.)
      • Authentication: Secure API calls require setting up Connected Apps in Salesforce, leveraging OAuth flows like JWT for authentication.
      • API Limits: Be aware of limitations, such as 250 requests per second for streaming APIs and a 200 KB payload size per request. These are important for designing your ingestion strategy.
      • Schema Mistakes: If you get a data type wrong in your schema, you generally cannot change it directly after creation.
    • Web & Mobile SDK:
  • These SDKs are specifically tailored to capture interaction data from websites and mobile applications, such as page views and clicks.
      • Key Benefits: They come with built-in identity tracking (managing both anonymous and known user profiles) and cookie management, simplifying the process of linking anonymous activity to known profiles once a user identifies themselves.
      • Consent Management: The SDKs also include integrated consent management, ensuring data is only collected and used with user permission.
      • Sitemap: A powerful feature that allows for centralized data capture logic across multiple web pages, reducing the need to embed code on every page.
      • Experience Cloud Integration: For Experience Cloud sites, a new integration feature provides a data kit that simplifies setup and automatically captures standard events.
      • SDK vs. Ingestion API for Web: For web and mobile applications, the SDK is generally preferred over the Ingestion API because it handles authentication more securely (no client-side exposure) and streamlines data capture.
    • Zero-Copy Integration:
      • This revolutionary feature allows Data Cloud to directly access live data stored in external data lakes and warehouses like Snowflake, Databricks, Google BigQuery, and AWS (S3, Redshift) without physically moving or duplicating the data.
      • Advantages: Offers near real-time data access, eliminates data duplication, and extends the value of existing data lake/warehouse investments.
      • Important Considerations: Data type alignment between your source system and Data Cloud is critical for successful mapping. Also, be prepared for network and security configurations (e.g., VPC, IP whitelisting) to ensure secure connectivity between Data Cloud (hosted on AWS) and your external cloud environments.
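To make the streaming Ingestion API shape concrete, here is a minimal request sketch, written as an Apex callout purely for illustration. The tenant endpoint, source and object API names, field keys, and token handling are all assumptions; your connector definition and Connected App determine the real values.

String accessToken = 'REPLACE_ME'; // obtained via the Connected App OAuth/JWT flow
HttpRequest req = new HttpRequest();
// Hypothetical source (POS_Events) and object (purchase) API names
req.setEndpoint('https://YOUR-TENANT-ENDPOINT/api/v1/ingest/sources/POS_Events/purchase');
req.setMethod('POST');
req.setHeader('Authorization', 'Bearer ' + accessToken);
req.setHeader('Content-Type', 'application/json');
// Streaming calls send small JSON batches wrapped in a "data" array
req.setBody(JSON.serialize(new Map<String, Object>{
    'data' => new List<Object>{
        new Map<String, Object>{
            'customer_id' => 'C-1001',
            'event_type'  => 'purchase',
            'event_ts'    => '2025-06-01T10:15:00Z',
            'amount'      => 42.50
        }
    }
}));
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode()); // expect a 2xx status when the batch is accepted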

    Data Harmonization and Modeling

    After data is ingested into Data Cloud, it enters the harmonization and modeling stage:

    • Data Lake Objects (DLOs): When data first enters Data Cloud, it’s stored in DLOs, which are essentially raw, un-transformed representations of your source data.
    • Data Model Objects (DMOs): DMOs represent Data Cloud’s canonical data model. The next crucial step is to map your DLOs to DMOs, transforming the raw data into a standardized structure that Data Cloud understands and uses for downstream processes.
    • Standard vs. Custom DMOs/Fields: Data Cloud provides standard DMOs (e.g., Account, Contact, Individual). Leverage these where possible. For unique business requirements or custom fields from your source systems, you have the flexibility to create custom DMOs or add custom fields to standard DMOs.
    • Formula Fields: These are powerful tools within Data Cloud, similar to Salesforce CRM formulas. Use them to augment your data (e.g., create composite unique keys for identity resolution) or cast data types if mismatches occurred during ingestion.
    • Interim DLOs: In complex scenarios, consider creating “interim DLOs.” These can be used as an intermediate step to maintain additional business context, perform standardization, or scrub data before it’s mapped to the final target DMOs.
    • Data Categories: When setting up data streams, you assign a category to the data, which influences how it’s used:
      • Profile Data: Contains identification information (like name, email, address) and is crucial for identity resolution.
      • Engagement Data: Represents event-driven interactions (e.g., website clicks, purchases, mobile app logins). This data is typically used for aggregated statistics and behavioral insights.
      • Other: For data that doesn’t fit neatly into the above categories.
    • Data Spaces: Data Cloud allows you to logically separate data using data spaces. These function similarly to business units in Marketing Cloud, enabling you to manage data for different regions, brands, or entities, and ensuring compliance with regulations like PDPA, GDPR, or CCPA by controlling data visibility and access.
    • Relational Model: Maintain a comprehensive data dictionary that details your entire data model, including relationships between DLOs and DMOs.

    Phase 3: Unification

    With your data ingested and harmonized, the next critical phase is unification, where disparate customer profiles are brought together into a single, comprehensive view.

    Identity Resolution

    Identity Resolution is the core capability that enables Data Cloud to build a single, unified customer profile from various data sources. This process is crucial to:

    • Avoid inflating your customer metrics.
    • Prevent sending redundant communications.
    • Enhance personalization across all touch points.

    The identity resolution process is typically two-fold:

    1. Matching Rules: These rules define the criteria for identifying when different records belong to the same individual. Examples include using fuzzy matching for first names (allowing for minor variations), exact matching for last names and email addresses, or linking records based on social handles.
      • Party Identification Model: Leverage external identifiers like loyalty member IDs or driver’s license numbers to enhance matching accuracy. This model helps link profiles across systems that might not share common direct identifiers.
      • Required Match Elements: Be aware of specific requirements when unifying accounts or individuals.
    2. Reconciliation Rules: Once potential matches are identified, reconciliation rules determine which attribute values will represent the unified profile. For instance, if a customer has multiple email addresses across different source systems, you can define rules to select the “most frequent” email, or prioritize data from a “source of truth” system.

    Key Considerations for Identity Resolution:

    • Thorough Data Understanding: A deep understanding of your data, including unique IDs, field values, and relationships, is paramount for configuring effective matching and reconciliation rules.
    • Start with Unified Profiles Early: Even if your initial match rates are low, begin building calculated insights and segments against unified profiles from the outset. This prepares your Data Cloud environment for seamless integration of new data sources in the future.
    • Credit Consumption: Identity resolution is a credit-intensive operation (e.g., 100,000 credits per million rows processed). While incremental processing is improving efficiency, careful planning of how often identity resolution runs is essential to manage costs.
    • Anonymous Data: By default, the Marketing Cloud Personalization connector sends events only for known users. Enabling anonymous events drastically increases data volume and credit consumption, and you should note that Data Cloud doesn’t reconcile anonymous events to known users out of the box. You’ll need to implement custom solutions for that reconciliation.
    • Data Quality is Paramount: The success of identity resolution hinges on the quality of your incoming data. If your source systems contain “garbage” (inaccurate or inconsistent data), your unified profiles will reflect that. Therefore, prioritize cleaning your source data before bringing it into Data Cloud.

    Phase 4: Activation – Turning Data Into Actions

    The final, and arguably most impactful, phase is activation. This is where you use your unified, intelligent data to drive personalized customer experiences and automate workflows across various channels.

    Calculated Insights

    Calculated Insights allow you to perform aggregations and transformations on your data to derive meaningful metrics. These can include:

    • Customer Lifetime Value (LTV)
    • Engagement Scores
    • Total Deposit per Month
    • Propensity to Buy

    These insights enrich your unified customer profiles, providing deeper understanding and enabling more sophisticated segmentation and personalization strategies.

    Segmentation

    Data Cloud’s segmentation capabilities enable you to create dynamic audience segments based on any harmonized attribute or calculated insight. This allows for precise targeting of specific customer groups.

    • Building Segments: Use the intuitive segment builder to drag and drop fields and apply criteria. You can combine rules with AND/OR logic to refine your audience.
    • Nested Segments: This feature allows you to incorporate one segment within another. However, be mindful of limitations, such as a maximum of 50 filters per segment.
    • Publishing: Publish segments to various activation targets. While Marketing Cloud Personalization supports only “standard publish,” other targets might allow “rapid publish” for faster audience delivery.

    Activation Targets and Activations

    After creating segments or calculated insights, you define activation targets, the destinations where you send this actionable data. Data Cloud offers broad activation capabilities:

    • Marketing Cloud: Push segments into Marketing Cloud data extensions for email personalization and Journey Builder entry events. You can also use Data Cloud data to influence different journey paths within Marketing Cloud, for example, by attaching custom attributes to Contact Builder.
    • Advertising Platforms: Directly send customer segments to major advertising platforms like Google, Meta, and Amazon for targeted campaigns.
    • Salesforce Flow: Initiate real-time Salesforce automation (Flows) based on data changes, calculated insights, or streaming events processed by Data Cloud. You can configure this via Data Actions.
    • Webhooks: Data Actions can also trigger webhooks to send data to virtually any third-party system.
    • Data Lakes & Warehouses: Securely share harmonized profiles, segments, or insights back to external platforms like Snowflake, Databricks, or Google BigQuery.
    • Business Applications: Push unified data or activate segments directly into other downstream business applications like ERP systems or other analytics tools.

    Platform Monitoring

    Consistent monitoring of your Data Cloud platform is crucial post-implementation. This includes:

    • API Ingestion Monitoring: Track data flow from MuleSoft or other APIs to Data Cloud.
    • Segment Publications: Verify that segments are publishing correctly and yielding expected results. Issues can occur if upstream data ingestion or unification breaks.
    • Activations: Ensure data is successfully reaching its intended activation targets.
    • Status Alerts: Subscribe to status.salesforce.com for updates on your instance to stay informed about any maintenance or performance degradations.

    Key Lessons Learned & Continuous Evolution

    Salesforce Data Cloud is a dynamic product that undergoes rapid evolution, with new features and changes rolling out frequently, often on a monthly basis, outside of the major seasonal releases. Staying current is key to maximizing your investment.

    Key lessons from real-world implementations:

    • Stay Connected: Maintain close communication with your Salesforce account team, participate in partner Slack channels, and engage with Trailblazer communities. This helps you stay informed about upcoming features, pilot programs, and best practices.
    • Non-Reversible Data Ingestion: Be extremely diligent in your planning, especially regarding data types and unique keys. Correcting bad data types or core stream elements after you ingest and activate data is highly difficult and often requires you to delete downstream segments, calculated insights, and even DLO/DMO mappings to re-implement. Plan ahead to avoid costly rework.
    • Marketing Cloud Connector Caution: The Marketing Cloud connector will bring in all subscriber data from your Marketing Cloud instance, including data from multiple business units. This can significantly impact your profile counts and potentially lead to overages if not anticipated and managed. Understand what’s in your “all subscribers” table before connecting.
    • Consumption Costs: Data Cloud operates on a consumption-based model, so every operation has a cost.
      • Data Ingestion: Volume of data ingested directly impacts cost.
      • Batch Transforms: These process the entire dataset for every execution, potentially burning significant credits even if data hasn’t changed.
      • Identity Resolution: This is a credit-intensive process.
      • Segmentation: Publishing segments also consumes credits. Carefully plan your data volumes, refresh schedules, and automation frequencies to manage and optimize credit consumption.
    • Zero-Copy Considerations: While revolutionary, ensure data type alignment between your source systems (e.g., Snowflake, Redshift) and Data Cloud. Also, factor in time for network and security setup for private connections between cloud environments.
    • Optimize Journeys for Data Cloud: Instead of trying to force Data Cloud activations into existing, potentially inefficient Marketing Cloud Journey structures, take the opportunity to remediate and optimize your journeys for best practices aligned with Data Cloud’s capabilities.
    • Data Cloud is NOT a Cleansing Tool: Reiterate this fundamental point: Data Cloud is primarily a data unification tool, not a data cleansing tool. It is your duty to ensure your source data is clean and accurate before it enters Data Cloud.
    • No Master Data Management (MDM) Solution: Data Cloud adopts a “key ring” approach to identity, focusing on linking various identifiers to a unified profile, rather than aiming to be a traditional “golden record” MDM solution.
    • Consent Management: The Web SDK includes built-in consent management. If you are using the Ingestion API, you will need to implement custom solutions to handle user consent requirements.
    • AI Integration: Data Cloud offers robust AI capabilities. You can build your own regression models using Einstein Studio with your Data Cloud data, or integrate external AI models from platforms like Amazon SageMaker, Google Vertex AI, Data Bricks, and even large language models from OpenAI or Azure OpenAI. This enables predictive analytics and smarter decision-making.

    Conclusion

    Salesforce Data Cloud represents a significant step forward in leveraging customer data. By breaking down silos, unifying profiles, and providing powerful activation capabilities, it empowers businesses to deliver hyper-personalized experiences and drive intelligent actions across their entire enterprise.

    To get started, you need to take a strategic approach, plan carefully, understand your data deeply, and commit to continuous learning as the platform evolves. By prioritizing use cases, ensuring data quality upstream, and leveraging the diverse ingestion and activation methods, you can successfully implement Data Cloud and unlock the full value of your customer insights. The journey may present challenges, but a truly unified and actionable customer view – once implemented and maintained effectively – will be a precious asset for your business.

    Explore related content:

    Bring Customer Data into Slack with Salesforce Channels

    How to Earn the Salesforce Data Cloud Consultant Certification

    Can You Use DML or SOQL Inside the Loop?

    How to Quickly Build a Salesforce-Native Satisfaction Survey Using SurveyVista

    #DataCloud #MarketingCloud #Salesforce #SalesforceAdmins #SalesforceDevelopers

    2025-06-10

    Display Product and Price Book Entry Fields in the Same Flow Data Table

    The Salesforce Flow Data Table component is a powerful screen element that allows users to view and interact with records in a structured, spreadsheet-like format within a Flow. It supports features like record selection, sorting, and filtering, making it ideal for building guided user experiences. For example, in a product selection use case, a sales rep can launch a Flow that displays a list of products retrieved from the Product2 or PriceBookEntry objects. Using the data table, the rep can easily compare options and select multiple products to add to an opportunity, all within a single, streamlined Flow screen.

The data table component was added to Salesforce based on the success of Eric Smith’s open-source data table component published on UnofficialSF. The out-of-the-box component is still not as powerful as its UnofficialSF sibling.

In this post, I will show you how I leveraged the transform element’s inner join functionality to bring together Product2 and PriceBookEntry field values, which I then displayed in the UnofficialSF data table component.

The inner join functionality is a powerful one, but it falls short of its full potential because Flow Builder does not offer a way for us to generate custom data types to hold the information we bring together.

I created a placeholder Apex-defined data type, which I used on the output side of the transform element. The UnofficialSF data table supports the display of Apex-defined collection data. Leveraging this functionality, I brought together the field values of both the Product and Price Book Entry objects so the user can make an informed product selection.

    🚨 Use case 👇🏼

The user will select products and add them to the opportunity record. When making the selection, the user should be able to see product information and price book entry information from the selected price book on the same row: product name, code, family, description, and unit price.

    Apex-Defined Data Types in Flow

    Apex-Defined Data Types allow developers to create custom, structured objects in Apex that can be used as inputs and outputs within Flow. These types enable more complex data handling than standard Flow variables, supporting multiple fields, including nested data, within a single variable. For example, you might define an Apex class that bundles together a product’s name, price, discount, and inventory status, then use it in a Flow to display custom pricing logic or pass structured data between Flow and Apex actions. This approach enhances flexibility and scalability when building advanced automation.

The key to making an Apex-defined data type available for flow is the @AuraEnabled annotation in the Apex class. Once you write an Apex class that defines the fields of the Apex-defined object and deploy it to production, you don’t need to do anything in Flow Builder to make this data type available in flow. In the areas where an Apex-defined resource selection is allowed, the new data type will be accessible.

I decided to create an Apex-defined data type with multiple fields of various types that I can use in Flow Builder. The fields I generated are:

    • 4 strings
    • 2 numbers
    • 2 currency fields
    • 1 boolean (checkbox)

    Here is the simple (the name says complex, but it is simple) Apex code that does the trick:

/**
 * ComplexDataCollection - Apex-defined data type for Salesforce Flow
 */
public class ComplexDataCollection {

    @AuraEnabled
    public String string1 { get; set; }
    @AuraEnabled
    public String string2 { get; set; }
    @AuraEnabled
    public String string3 { get; set; }
    @AuraEnabled
    public String string4 { get; set; }
    @AuraEnabled
    public Decimal number1 { get; set; }
    @AuraEnabled
    public Decimal number2 { get; set; }
    @AuraEnabled
    public Decimal currency1 { get; set; }
    @AuraEnabled
    public Decimal currency2 { get; set; }
    @AuraEnabled
    public Boolean boolean1 { get; set; }
}

You will need a test class to deploy this code to production. That should be easy, especially with the help of AI, but let me know if you need me to post the test class.
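If you want a starting point, a minimal test class sketch could look like the following. It simply instantiates the type and asserts that values round-trip, which is enough to exercise the properties.

@isTest
private class ComplexDataCollectionTest {
    @isTest
    static void setsAndReadsProperties() {
        // Instantiate the Apex-defined type and verify its properties hold values
        ComplexDataCollection cdc = new ComplexDataCollection();
        cdc.string1   = 'Product Name';
        cdc.number1   = 2;
        cdc.currency1 = 19.99;
        cdc.boolean1  = true;
        System.assertEquals('Product Name', cdc.string1);
        System.assertEquals(19.99, cdc.currency1);
        System.assertEquals(true, cdc.boolean1);
    }
}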

    Transform and Join Product and Price Book Entry Field Values to Populate the Apex-Defined Data Type

    Follow these steps to prepare your data for the data table component:

    1. Get all the Price Book Entries for one Price Book.
    2. Get all the Products in the Org (limit your get to 2,000 records for good measure).
    3. Join the two collections in the transform element using the Product2 Id.
    4. Map the fields from source collections to the Apex-defined data type.

    Here is more detail about the transform element configuration:

    1. Add the transform element.
    2. Add the price book entries collection from the get element on the left side.
    3. Add the product collection on the left side.
    4. Add an Apex-defined collection on the right side. In my case this is called “ComplexDataCollection”. Search by name. Make sure you check the collection checkbox.
    5. Click on the first collection on the left side at the top collection level (not next to the individual fields). Connect this to the collection on the right side. You will see instructions for inner join.
    6. Click on the second collection on the left side. You should see a join configuration screen. Configure your join. More instructions will follow.

    Configure your join:

    1. Left source and right source order does not matter for inner join. Select both collections on the left side.
    2. The join key will be Product2 on the PriceBookEntry and Id on the Product2.
    3. Select the fields you want on the output. For me these are: Name, ProductCode, UnitPrice, Family, Description. I also added IsActive, which I did not end up using in the data table.
    4. Map these to your Apex-defined object fields: string1 through string4, currency1 and boolean1 (if you want isActive).

    Your configured transform join should look like the screen image below.

    Prepare the Apex-Defined Object Data for the Data Table

The UnofficialSF data table supports Apex-defined objects, but requires that the input is serialized. The data table cannot process an Apex-defined collection directly as input; it expects a JSON-formatted string. More on that is available in Eric Smith’s post HERE.
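To make the expected format concrete, a serialized collection of our Apex-defined type could look like the string below (one record shown, null fields omitted for brevity; the field values are made up for illustration):

[{"string1":"Laptop X","string2":"LP-100","string3":"A powerful laptop","string4":"Hardware","currency1":1299.00,"boolean1":true}]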

To achieve this, you can either leverage Apex or do the processing in flow. I tried both ways, and both methods work. The flow method requires looping.

    Here is the Apex code for the invocable action that serializes the data:

/**
 * Sample Apex Class Template to get data from a Flow,
 * process the data, and send data back to the Flow.
 * This example translates an Apex-Defined Variable
 * between a Collection of Object Records and a Serialized String.
 * Eric Smith - May 2020
 **/
public with sharing class TranslateApexDefinedRecords {     // *** Apex Class Name ***

    // Attributes passed in from the Flow
    public class Requests {
        @InvocableVariable(label='Input Record String')
        public String inputString;
        @InvocableVariable(label='Input Record Collection')
        public List<ComplexDataCollection> inputCollection;     // *** Apex-Defined Class Descriptor Name ***
    }

    // Attributes passed back to the Flow
    public class Results {
        @InvocableVariable
        public String outputString;
        @InvocableVariable
        public List<ComplexDataCollection> outputCollection;    // *** Apex-Defined Class Descriptor Name ***
    }

    // Expose this Action to the Flow
    @InvocableMethod
    public static List<Results> translateADR(List<Requests> requestList) {
        // Instantiate the record collection
        List<ComplexDataCollection> tcdList = new List<ComplexDataCollection>();    // *** Apex-Defined Class Descriptor Name ***

        // Prepare the response to send back to the Flow
        List<Results> responseWrapper = new List<Results>();

        // Bulkify processing of multiple requests
        for (Requests req : requestList) {
            // Instantiate one response per request (inside the loop so bulk
            // requests don't share a single response object)
            Results response = new Results();

            // Get input value(s)
            String inputString = req.inputString;
            tcdList = req.inputCollection;

            // BEGIN APEX ACTION PROCESSING LOGIC
            // Convert serialized string to record collection
            List<ComplexDataCollection> collectionOutput = new List<ComplexDataCollection>();    // *** Apex-Defined Class Descriptor Name ***
            if (inputString != null && inputString.length() > 0) {
                collectionOutput = (List<ComplexDataCollection>) System.JSON.deserialize(inputString, List<ComplexDataCollection>.class);    // *** Apex-Defined Class Descriptor Name ***
            }
            // Convert record collection to serialized string
            String stringOutput = JSON.serialize(tcdList);
            // END APEX ACTION PROCESSING LOGIC

            // Set output values
            response.outputString = stringOutput;
            response.outputCollection = collectionOutput;
            responseWrapper.add(response);
        }
        // Return values back to the Flow
        return responseWrapper;
    }
}

Please note that this code refers to the name of the first Apex class (ComplexDataCollection). If you change that name, you will need to update the references here as well. Source: Eric Smith’s Blog.

    See how the action will be used and configured in the image below.

    Data Table Configuration

    Here is how you configure the data table for this data:

    1. Give your data table an API name.
    2. Scroll down to the advanced section and check the checkbox titled Input data is Apex-Defined.
    3. Add the string variable you used to assign the value of the translate action output to Datatable Record String.
    4. For the required unique Key Field input use the string that has the product code. For me this is string2.
    5. To configure Column Fields add string1,string2,string3,string4,currency1 there.
    6. Add 1:Name,2:Code,3:Description,4:Family,5:Price for Column Labels.
    7. Configure Column Types by adding 1:text,2:text,3:text,4:text,5:currency there.

    Once completed, you should see a similar output to this image below.

    Conclusion

While this example illustrates how Apex can boost the capabilities of flow, setting up this solution to leverage Apex-defined data types in Flow Builder and in the data table is quite cumbersome.

    This was more of an experiment than a solution I will use frequently.

    If you don’t want to write code, you can easily create a custom placeholder object to achieve a similar result with the out of the box data table component.

    I look forward to having this functionality built into the flow builder in the coming releases. I hope Salesforce product teams will prioritize this.

    Explore related content:

    How to Use the Data Table Component in Screen Flow

    Send Salesforce Reports and Dashboards to Slack with Flow

    How to Use the Repeater Component in Screen Flow

    London’s Calling and Antipatterns to Look For in Flow

    #DataTable #InnerJoin #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #TransformElement

[Images: Display Product and Price Book Entry Fields in the Same Flow Data Table · Transform Inner Join Configuration · Completed Transform Join Configuration · Translate and Serialize Action]
    2025-05-23

    Formula Resources in Criteria Conditions—Yes or No?

Salesforce Flow is a powerhouse for automation. And when it comes to building smart, dynamic Flows, Formula Resources play a critical role. They compute values for create, update, and action elements, and they calculate values to compare against in criteria conditions. But how exactly do they work? And where should you use them?

    In this post, we’ll break down their use and explore whether we should be using them in criteria logic across various Flow elements.

    What is a Formula Resource?

    A Formula Resource in Flow is like a mini-calculator that evaluates to a single value (text, number, Boolean, or date) based on logic you define. Think of it like a formula field on an object, but used inside your Flow instead of the database.

    Formula Resources use the same syntax as formula fields, including functions, operators, and references to variables or record fields.

    Where Are Formula Resources Used?

    You can use them in many flow elements, such as:

    Decision Elements

    You can use Formula Resources in Decision outcomes to:
    • Evaluate complex conditions in a clean and reusable way.
    • Reference a single Boolean formula instead of adding it to multiple outcomes.
    Example: You might define a Formula Resource like:

    {!IsHighValueOpportunity} = {!Opportunity.Amount} > 100000

    Then in your Decision element, you simply check if IsHighValueOpportunity = TRUE.

    Update Elements

    You can use formula resources in two ways in update elements:
    • On the right side of a criteria condition that determines when to execute the update.
    • On the right side of the field update to calculate and determine the new field value.

    Get and Collection Filter Elements

    You can use a formula resource on the right side of the criteria condition to specify which records you want in your output while configuring these elements.

    Assignment, Create and Action Elements

While these elements don’t have criteria conditions, they can utilize formula resources to compute field and parameter values.

    Example:

    You might use a Formula Resource like:

    {!TodayPlusSeven} = {!$Flow.CurrentDate} + 7

    And then assign this value to the due date of a task or the close date of an opportunity.

    Benefits of Using Formula Resources in Criteria

The biggest advantage of using a formula resource in a criteria condition, rather than building the logic in the element line by line, is reusability: you can reference the same formula again elsewhere in your flow.

    One could also argue that formula resources can handle complex logic better in certain situations.

    Formula Resource or Multi-line Criteria Conditions

Instead of inserting formula resources in criteria for decisions and updates, consider building multi-line conditions combined with AND and OR operators. Formula resources may negatively impact performance in larger flows that contain many of them. They are computed several times throughout the execution of the flow, which may be more resource-draining than building a multi-line criteria condition inside one element.

    If you are not worried about performance in your particular case, or this is not a record-triggered flow, then this may not be a concern.

    There are several other advantages of building criteria conditions directly inside an element like a decision:

    • Readability: Even if you find a very descriptive name for your formula resource and add a description to it, it becomes a black box that you have to open, in order to understand the logic.
    • Maintenance: Unless you use the same formula resource more than once in your flow, clicking through multiple formula resources to understand and update the logic can be more difficult than doing the same inside the element.
    • Ease of debug: Your debug log can show more detail about how your criteria condition logic evaluates the data compared to a formula resource that just returns a Boolean value (true/false).

    Pro Tip 1: When setting up formula resources, prefer returning a value to compare to, rather than a boolean value if your use case supports this. Example: Prefer returning the difference of days between today and the record created date, rather than setting up an IsRecent boolean formula resource that returns true when the record was created in the last seven days.
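For example, a day-difference formula could look like this minimal sketch. It assumes a record-triggered flow where {!$Record} is available, and the resource name DaysSinceCreated is hypothetical:

{!DaysSinceCreated} = {!$Flow.CurrentDate} - DATEVALUE({!$Record.CreatedDate})

Your criteria condition then compares {!DaysSinceCreated} Less Than or Equal To 7, and the debug log shows the actual day count instead of just true or false.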

    Pro Tip 2: If you need a not contains criteria condition and only see contains, you can go to custom logic in most cases and add a NOT() around the criteria condition with the contains clause.
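For example, assuming your contains clause is the second of two conditions, the custom condition logic could read:

1 AND NOT(2)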

    Conclusion

    Use formula resources only when you definitely need them. Consider setting up multi-line conditions combined with AND and OR operators instead. Name them clearly and add comments and descriptions.

    If your use case requires setting up complex formula resources, break down the formula in smaller pieces and test them separately, before you put the whole thing together. Sometimes it may make sense to create a formula field on the object temporarily, when building a complex formula. This way, you can see the result of the computation immediately on multiple records (leverage list views).

    Remember that the comment syntax used for Apex also works in formula resources. You can use it to add comments to complex formulas, like this: /* Example: This comment can wrap over multiple lines. */.
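Here is a small hedged sketch of a commented formula; the field references assume an Opportunity record-triggered flow:

/* High-value check: amount above 100K */
{!$Record.Amount} > 100000
/* and the opportunity is still open */
&& NOT({!$Record.IsClosed})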

    This post is part of our Best Series collection. Read the other posts HERE.

    Explore related content:

    How To Build Flex and Field Generation Prompt Templates in the Prompt Builder

    Start Autolaunched Flow Approvals From A Button

    Can You Start With a Loop Inside Your Schedule-Triggered Flow?

    Display Product and Price Book Entry Fields in the Same Flow Data Table

    A Comparative Look at Flow Decision Elements in Salesforce

    #Apex #BestPractices #FormulaResources #LowCode #NoCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
    2025-05-17

    Get Ready for the New Time Data Type – Summer ‘25 Flow Goodness

    Salesforce Flow is constantly evolving, bringing us enhancements that make our lives as admins, developers, and business users much easier. The Summer ‘25 release is described as a big one, packed with substantial updates and quality-of-life improvements. Among these exciting additions is a feature many have been waiting for: native support for the Time data type in Flow.

    What is the Time Data Type and Why is it Important?

    The new Time data type is specifically designed for situations where the time of day matters, but the date does not. Previously, handling time-specific data in Flow without including the date could be complex. Summer ’25 changes that, allowing you to process data focused purely on time, down to the millisecond.

    This capability is incredibly handy for a variety of use cases:

    • Managing communication times, such as determining when to send an email.
    • Checking if actions occur within specific business hours.
    • Creating flows to send reminders based on a time before an event, like an email reminder 30 minutes before a meeting.

    Where Can You Use the Time Data Type in Flow?

    The Time data type is available across a wide range of Flow features, providing flexibility in how you build your automations. You can use Time fields and resources in:

    • Various Flow elements, including Action, Assignment, Collection Filter, Collection Sort, Create Records, Delete Records, Decision, Get Records, Subflow, Transform, Update Records, and Wait for Conditions.
    • Formula builder and expression builder.
    • Resources such as variables and constants.
    • As input and output for invocable actions.

When working with time values, you should use the hh:mm:ss.SSS AM/PM format, though including seconds or milliseconds is optional. For instance, 9:00 AM, 5:30:05 PM, and 2:45:53.650 PM are all valid time values.

    New and Improved Time Functions

    To complement the new data type, Salesforce Flow also introduces or enhances formula functions specifically for working with time. In the formula editor, you can now effectively use functions such as HOUR(), MINUTE(), SECOND(), MILLISECOND(), TIMENOW(), and TIMEVALUE(). These functions empower you to perform calculations and make decisions based on time data within your flows. Previously, extracting and manipulating time in Date/Time fields was very difficult, and it involved parsing text values that contained this information.
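As a quick illustration, a Boolean formula that checks whether the current time falls within business hours could look like the sketch below. Keep in mind that TIMENOW() returns the current time in GMT, so you may need to adjust for your time zone:

HOUR(TIMENOW()) >= 9 && HOUR(TIMENOW()) < 17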

    Important Considerations

    • The Time data type is currently not supported in the offline flows available on the Salesforce Mobile app.
    • This change applies to flows running on API version 64.0 or later. If you have existing flows created with API version 63.0 or earlier that use custom fields of the time data type, they will continue to work as before. However, to leverage the full functionality of the updated time data type in those flows, you’ll need to edit them and save them as a new version configured to run on API version 64.0.

    Random Number Generation

One benefit of the new time-related capabilities is that you can use the new functions to generate random numbers; there is no native random number generator function in flow. Previously, I extracted the seconds from a Date/Time value to generate a random number; now I can generate one using milliseconds.
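For instance, here is a minimal sketch of the idea that produces a pseudo-random number between 1 and 10 (the resource name is hypothetical):

{!RandomOneToTen} = MOD(MILLISECOND(TIMENOW()), 10) + 1

The use case below prorates the millisecond value differently, but the principle is the same.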

    🚨 Use Case 👇🏼

    Select multiple leads on a data table to add them to a prize drawing. Generate a random number and determine the winner. Email the winner to communicate the prize they won.

    For this use case I leveraged many new flow functionalities.

    Let’s get right to the build.

    Build the Screen Action Autolaunched Flow

The selected leads can span several pages of the data table while the user completes their selection. I decided to use an autolaunched flow to compile a CSV list of lead names, which is shown under the data table as the user completes their selection.

For that, I built an autolaunched flow. Follow these instructions to build yours:

    1. Start an autolaunched flow.
    2. Create a Lead Collection Record Variable and make it available for input.
    3. Create a Name CSV Text Variable and make it available for output.
    4. Use the transform element to extract a text collection variable of names (first names) out of the lead collection record variable (not required, I wanted to use this new feature).
    5. Loop the names collection text variable.
    6. Add an assignment to add the current name text, and then a comma and a space character to the Name CSV Text Variable.
    7. Outside the loop add another assignment to assign a new value to the Name CSV Text Variable. This new value will be the accumulated names in csv format with the last comma and the space character removed. Use a formula resource to compute the value. The formula is: LEFT({!NameCSVTextVar},LEN({!NameCSVTextVar})-2)
    8. Debug, save and activate the flow.

    Build the Screen Flow

Follow these instructions to build your flow:

      1. Start a screen flow.
      2. Get the leads in the org where the email is not null (limit the get to 2,000 records to avoid hitting limits).
      3. Add a screen. Place a data table on the screen showing the leads, and allow for multi selection. Add a screen action to the screen and point it to the autolaunched flow you created above. Pass the Lead Data Table Selected Rows to the screen action autolaunched flow as input.
      4. Assign the count to a Count Number Variable (no decimals). Also assign the winner number to a Winner Count Variable. This is to ensure that the number does not change in debug (I don’t think it will change in production execution). You will need a formula resource to determine the winner. Here is what this formula does: it generates a number between 1 and 1,000 using the milliseconds value at the time of execution, and prorates that by the number of leads the user selected to determine the winning number. Assign the following formula value to the Winner Count Variable: ROUND(((MILLISECOND(TIMENOW())+1)*{!CountLeadsVar}/1000),0)+1
      5. Loop the Lead Data Table Selected Rows and assign a value incremented by 1 to a counter variable in every iteration (CounterVar Add 1).
      6. Check via a decision whether the winning number is equal to the counter variable.
      7. If the winner is determined, assign the lead record to the Winning Lead Record Variable and exit the loop. If not, keep looping.
      8. Outside the loop, send an email to the winning lead’s email address and congratulate them. I built my email template inside the brand-new email action element for this one (Summer ’25).
      9. Debug, save and activate the flow.

Please note that I tried conditionally running the screen action only after the user selects the first data table row, but that functionality (Summer ’25) does not seem to work properly in preview. I have a ticket open with Salesforce to determine whether this is a bug.

    If you want to see the flow in action, watch this video.

    Conclusion

    The introduction of the Time data type is a significant step forward for Flow, enabling more precise and efficient time-based automation. It’s one of the many high-impact features and quality-of-life improvements packed into the Summer ’25 release that are bound to make your job easier.

    Ready to give it a spin? Don’t forget to sign up for a pre-release org to test out this and other new features! You can also find more details in the Summer ’25 release notes.

    Explore related content:

    Salesforce Summer ’25 Preview: Major Flow Changes to Watch For

    Time Zone and Time Operations in Flow

    Supercharge Your Approvals with Salesforce Flow Approval Processes

    #AutolaunchedFlow #DecisionElement #GetRecords #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Summer25 #Time #TimeDataType

[Images: Get Ready for the New Time Data Type – Summer ‘25 Flow Goodness · Remove Last Comma Formula Resource · Screen Action Autolaunched Flow · Time Function Random Number Formula]
    2025-05-14

    Error Busters: Guide to Solving Flow and Apex Trigger Errors in Salesforce

    Hey there, fellow Trailblazers! Ever hit a snag with a Flow or an Apex trigger and felt like you needed a magnifying glass and a detective hat? You’re not alone! Errors happen, but luckily, Salesforce gives us some fantastic tools and strategies to figure out what went wrong. Think of troubleshooting less like a chore and more like some fun detective work.

    Last week I attended MidAtlantic Dreamin where I presented on this topic. You can download my slides using the link on the bottom of this page. But before that, let me give you an overview of the content.

    Here are my tips for solving flow and Apex trigger errors in Salesforce.

    Starting Your Detective Work: The Essential First Steps

    When an error pops up, don’t panic! Take a deep breath and start with the basics:

    1. Read the Error Message Carefully: This might sound obvious, but error messages are your first clue! For Flows, check the error message at the top and bottom of any fault emails you receive – they can be super descriptive. For Apex, you’ll get unhandled exception emails; make sure to read these closely too. Sometimes the message itself tells you exactly what happened.
    2. Inspect Your Test Data: Make sure the data you’re using for testing is appropriate. If you’re working with Flows, try not to reuse old records. Why? They might have weird field value combinations due to recent changes in automation or validation rules. For Apex, your test classes should generate appropriate test data.
    3. Become a Debug Log Pro: The debug log is your absolute best friend! It shows you exactly what happened during the execution of your Flow or Apex trigger. For Flows, try reading the log like a book from top to bottom to trace the path. For Apex, the log shows variable values and exceptions. You can set up a detailed user trace for the user running the automation to capture all the gory details.
    4. Check Fault Emails (and Failed Flow Interviews): If a production Flow fails and doesn’t have a fault path (more on those later!), an administrator or the creator gets a fault email. This email is golden! It has a link directly to the failed interview, showing you a stop sign icon, and often highlights the failing element. You can also check the Paused and Failed Flow Interviews screen in Setup for details and debug log links. Apex failures send unhandled exception emails. Read those emails carefully.

    Flow-Specific Super Skills

    Flows have some unique debugging powers. Make sure you use them!

    • Debug As You Build: Don’t wait until your Flow is a giant spaghetti monster! Debug it in meaningful parts right after saving each section. This keeps the debug log manageable and helps you catch issues early.
    • Use the Flow Debugger Extensively: This built-in tool is fantastic! You can skip start conditions, run as a different user (great for checking permissions!), and inspect variable values at every step.
    • Look for Stop Signs: When debugging, if you see a stop sign, that means an error happened during that debug run. Also, make sure the path followed is highlighted all the way to the end for a successful execution.

    • Test Activated Flows in a Sandbox: Debugging is awesome, but the real test is when the Flow is activated and running in a sandbox environment. This simulates production better.
    • Understand Transaction Boundaries: Be aware that errors in one part of a screen Flow transaction might not roll back changes from a previous successful step unless you use a Roll Back Records element. This is especially important with screen Flows and data manipulation. Know that you may get a partial save and commit for collection operations when using the fault path in a flow.
    • Reverting Versions: If an error started happening after a recent change, consider reverting to a previous working version. Sometimes, a Flow can even become corrupt and you might need to clone a working version or build a new one.
    • Scheduled Path Flow Testing: For these, debug each scheduled path separately. When initially testing, set the schedule to run one minute later for quick results. You can monitor pending actions in the Time-Based Workflow screen.

    Apex Trigger Troubleshooting Tactics

    Apex triggers require a slightly different approach:
    • Isolate the Trigger: If you have several triggers on the same object, try to figure out which one is causing the error.
    • Review Your Logic: Go through the code carefully, looking at conditions, loops, and DML operations to pinpoint the source of the error.
    • Consider a Trigger Handler Framework: If you have many triggers on an object, using a framework can help manage and orchestrate them, making isolation, testing and debugging easier.

    Tips for Both Flows and Apex: The Universal Rules

    These gems apply whether you’re building with clicks or code:
    • Master the Order of Execution: Salesforce has specific steps it follows when a record is saved. Understanding how Flows, Apex, Workflow Rules, and Process Builders interact is crucial. Knowing the order helps you find conflicts.
    • Respect Governor Limits: Salesforce sets limits on things like how many SOQL queries or DML operations you can run in a transaction. Going over these limits causes errors! A common mistake is doing operations inside loops (see the sketch below). Flows and Apex even have some separate limits (e.g. scheduled jobs).
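To make the loop mistake concrete, here is a hedged Apex sketch contrasting the anti-pattern with its bulkified fix. The object and field choices are illustrative:

public class LeadLoopExamples {
    // Anti-pattern: a DML statement inside the loop.
    // With more than 150 records this hits the DML statement governor limit.
    public static void updateInLoop(List<Lead> leads) {
        for (Lead l : leads) {
            l.Status = 'Working - Contacted';
            update l;       // one DML per record: avoid this
        }
    }

    // Bulkified fix: modify the records in the loop, then do a single DML outside it.
    public static void updateBulkified(List<Lead> leads) {
        for (Lead l : leads) {
            l.Status = 'Working - Contacted';
        }
        update leads;       // one DML statement for the whole collection
    }
}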

    • Simplify or Isolate Complex Automation: If your automation is huge and complicated, try breaking it down or isolating the specific part where you think the error is happening.
    • Check Permissions and Access: Does the user running the Flow or trigger have the right permissions for the objects and fields involved? Test running Flows as different users to check this.
    • Look at Related Automation: Sometimes, an error in your Flow or trigger is actually caused by a validation rule or another piece of automation running on the same object. Deactivate or examine related automation to see if they’re the culprits. Using Custom Metadata Type switches can help with temporarily disabling automation for testing (see Using Custom Metadata Types in Flows Without Get).
    • Search Online! If you get a weird error message, chances are someone else has seen it before! Search Salesforce forums, Stack Exchange, and other communities. Leverage AI LLM models.
    • Ask for Help: Don’t bash your head against the wall alone! Collaborate with other admins or developers. Sometimes a fresh pair of eyes is all you need. Join the Flownatic Slack Workspace.
    If you experience frequent issues, consider setting up a logger like Nebula Logger (Jonathan Gillespie – jongpie on GitHub).

    Troubleshooting might seem a bit daunting at first, but with these strategies, you’ll be zapping those errors like a pro! Just remember to be Sherlock Holmes, look for the little signs, and enjoy the process! Download my slideset from MidAtlantic Dreamin here. Happy Debugging!

    Explore related content:

    How to Set Up Automated Email Alerts in Salesforce Flow

    15 Effective Salesforce Flow Debug Strategies

    Is Your Salesforce Flow Too Big?

    How to Measure Flow Performance and Why You Should Care

    #Apex #ApexTrigger #Debug #MidAtlanticDreamin #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

[Images: Error Busters blog image (a man in detective clothes looking at his computer with a magnifying glass) · User Trace Debug Log · Paused and Failed Flow Interviews Screen · Successful Flow Debug Run]
    2025-05-08

    Why Traditional Support Models Fail Salesforce Managers – and How Flow Canvas Support Changes the Game

    If you’re the Salesforce platform owner at a business or nonprofit, Salesforce is the backbone of your revenue or donation engine. It routes leads, guides proposals, launches projects, and keeps every stage of the client life cycle humming. When it works, no one notices; when it doesn’t, all eyes turn to you.

    Yet most vendor-support offerings were never designed for the blend of urgency, nuance, and budget constraints that mid-sized firms face. A 2023 Intelligent CIO survey found 72 percent of CIOs call the standard vendor-support model “inadequate.” Top complaints? Lack of accountability and lack of expertise—often when third-party packages enter the picture.*

    This post unpacks why that gap exists, what it costs in real dollars (and reputation), and how a new model, Flow Canvas Support, gives operations leaders the breathing room they’ve been looking for.

    Why Mid-Sized Firms Need a Different Approach

    Large enterprises paper over support gaps with headcount: release managers, on-call admins, vendor-management teams. Mid-size firms run lean by design; your Salesforce admin often doubles as analyst, release gatekeeper, and part-time data steward.

That’s precisely where traditional support collapses. You don’t need generic “how-to” articles. You need a specialist who understands why your project-intake-form Flow triggers the Apex code—and what happens downstream if that code errors out. And you need that person fast, not in two to three business days.

    With personalized support plans and coaching that fits real schedules, Flow Canvas is ready to step in when things break. We help admins go from barely keeping up to confidently moving forward. It’s all about helping your admins fix automation errors fast, and build the kind of team that makes Salesforce shine.

    Flow Canvas Support is your go-to resource for expert help, offering managers and teams dependable, on-demand assistance for flow errors, trigger issues, and configuration questions.

    Enter Flow Canvas Support

    Flow Canvas Support was built by architects who spent years inside professional-services orgs facing exactly those pain points. Rather than sell another rigid managed-services block, we flipped the model:

    • Fast human response – You get a human response to your cases, and you decide when you need real-time live support from a Salesforce support engineer.
    • Flow-and-Apex first – Deep specialization in automation, both for flow and Apex code.
    • Coaching baked in – Every incident doubles as a mini-workshop, so your admin leaves smarter than they arrived.
    • Month-to-month flexibility – Pick one of the affordable monthly plans and cancel anytime.

    Make Salesforce Work Smarter for Your Team

    Our support plans are built to stabilize your CRM operations, enhance business processes, and achieve the performance levels you targeted. Enjoy increased ROI with affordable, personal, and straightforward support from real people—not chatbots. Elevate your CRM from a source of frustration to a powerhouse of productivity, with solutions that fit perfectly with your business goals and growth trajectory.

    Your On-Demand Safety Net

    Flow Canvas Support is designed to extend the capabilities of your Salesforce team without the need for additional full-time hires. Whether you’re a business owner, manager, a developer managing multiple orgs, or a consultant needing an extra hand, Flow Canvas Support provides responsive, knowledgeable assistance precisely when it’s needed.​

    Flexible Support Plans

    Flow Canvas offers three tiered support plans to accommodate varying needs and budgets:​

    • Silver Plan: At $95/month, this plan includes up to 2 tickets per month. Live support is available at $115 per hour for low-code issues and $155 per hour for code-related challenges.​
    • Gold Plan: Priced at $195/month, it covers up to 5 tickets monthly. Live support rates are $105 per hour for low-code and $145 per hour for code issues.​
    • Platinum Plan: For $295/month, this plan offers up to 8 tickets per month, with live support at $95 per hour for low-code and $135 per hour for code-related assistance.​

    Learn more here 👉🏼 https://flow-canvas.com/flow-canvas-support/

    These plans are designed with flexibility in mind, allowing you to scale support based on your team’s capacity and workload. With no annual contracts, you maintain control over your budget and roadmap.​

As a manager or business owner, you’re responsible for keeping operations smooth, supporting your team, and protecting your Salesforce investment. But when your admin is stretched thin—firefighting bugs, chasing automation errors, and scrambling to keep up with platform changes—they miss important things. Whether your admins need backup during a big project, or you want ongoing support without the cost of another full-time hire, Flow Canvas fills the gap so your team can stay focused, confident, and capable. Supporting your admin is supporting your business.

    Why is Flow Canvas Support a Better Option?

    Expert Support Without the Overhead

    Flow Canvas Support provides the expertise of a seasoned Salesforce team without the costs associated with hiring additional full-time staff. This approach allows businesses to access high-quality support while maintaining financial flexibility.

    ​Commitment to Client Success

    Flow Canvas delivers value and ensures every client’s satisfaction. It meticulously designs each service to tackle common challenges, boost Salesforce ROI, and keep systems performing at their best.

    Boost Your ROI Today

    Embarking on your journey with Flow Canvas is straightforward. Visit the website at https://flow-canvas.com/ to explore support plans, register for courses, and learn more about how Flow Canvas experts can enhance your Salesforce experience.​

    Whether you’re seeking to alleviate the burden on your admin team, enhance your Salesforce ROI, or ensure your CRM system operates at peak efficiency, Flow Canvas offers the tools and support necessary to achieve your goals.​

    Conclusion

    Operations managers are the unsung heroes of growth. Your ability to keep Salesforce agile dictates how quickly the business can pivot, upsell, and deliver client value. Traditional support models were built to deflect tickets, not to empower you.

    Flow Canvas Support flips that script. With rapid human response, deep automation expertise, and a built-in coaching mindset, we free you from firefighting and hand you the levers of strategic impact.

    Ready to see the difference? Visit flow-canvas.com and fill out the contact form to take advantage of our complimentary assessment.

    Frequently Asked Questions

    Who is Flow Canvas Support best suited for? 

    The service is ideal for Salesforce Platform Managers and Administrators who are looking for tailored support in managing their Salesforce environments. This includes troubleshooting, customization, and optimization of Salesforce systems.

    How does ticket support work? 

    This process involves submitting specific issues or queries through a formalized system where each submission is tracked as a case. Users receive personalized solutions for their problems, and the support team manages and resolves these tickets asynchronously.

    How does live support work? 

    This stage includes scheduled real-time assistance, typically provided via Zoom. Users interact directly with support professionals to troubleshoot problems, receive immediate guidance, and solve issues on the spot without the wait associated with ticket resolution.

    How does the support for triggers and flows differentiate from Salesforce support?

    Unlike Salesforce’s broader support, Flow Canvas focuses specifically on automation and code-related issues that often require more specialized, hands-on assistance. This approach provides a more personalized and cost-effective support option for admins dealing with specific performance issues. When your issue involves AppExchange solutions, we will work with you without unnecessary handoffs often experienced with Salesforce Support plans.

What If You Want To Stop The Service?

    Support plans offer a flexible cancellation policy, allowing clients to stop service anytime without long-term commitments, making it a low-risk choice for your Salesforce support needs.

    Contact Flow Canvas Support today for a complimentary assessment!
    https://flow-canvas.com

    *Source

    Note: The information provided in this blog post is based on publicly available data from Flow Canvas’s website as of April 2025. For the most current details on services, pricing, and offerings, please refer directly to https://flow-canvas.com/.

    Explore related content:

    How To Build Custom Flow Actions For Agentforce – Planning Phase

    Stop Agentforce Dev Orgs From Expiring

    How To Build Flex and Field Generation Prompt Templates in the Prompt Builder

    #Apex #Automation #Flow #LowCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceSupport

    2025-04-10

    How To Build Flex and Field Generation Prompt Templates in the Prompt Builder

Prompt engineering is one of the most important skills of our era. With the rapid adoption of AI large language models, the use of language has become a critical skill again. One could argue that effective natural-language writing will be more important than coding in the future. Prompt writing is also critical for Salesforce Agentforce.

    While writing in English – or any other language that the LLM understands – is important, we need to keep in mind how LLMs work so that we can design well-written instructions, establish guardrails, and ground our prompts to produce accurate results.

    Writing prompts is not necessarily a Salesforce skill. This is now a universally sought-after skill that everyone needs to master.

    Introduction to Prompt Builder

Prompt Builder is a tool that facilitates the creation of AI prompts within Salesforce, enabling users to automate responses based on specific data inputs. It reduces the manual effort of producing these results, increasing operational efficiency across various business functions.

    Integration with Salesforce Flow and Apex

    Prompt Builder is very well integrated with Salesforce Flows and Apex. This integration allows for excellent customization and functionality expansion, enabling users to execute sophisticated prompts through elaborate workflows and automate intricate tasks. Additionally, it facilitates simple data retrieval from multiple sources, including Salesforce’s Data Cloud and external APIs.

    Salesforce Prompt Template Types

    Here are all Salesforce Prompt Template Types:

    1. Einstein AI-Generated Search Answers
      Generates concise answers from indexed content like knowledge articles or documents in response to user queries.
    2. Extract Product Mentions
      Identifies and extracts product names or references from unstructured text, such as case descriptions or chat transcripts.
    3. Field Generation
      Uses prompt templates to generate content for specific fields on Salesforce records, like call summaries or meeting notes.
    4. Flex
      A highly customizable prompt type for general-purpose AI tasks, giving you full control over inputs and outputs.
    5. Knowledge Answers
      Retrieves and summarizes information specifically from Knowledge Articles to answer questions accurately.
    6. Record Summary
      Creates summaries of Salesforce records, such as Opportunities, Cases, or Contacts, by analyzing record data and related activity.

    We will use field generation and flex types for this post.

    Real-World Applications

A prompt template is an excellent tool for summarizing the history of a customer and producing recommended actions. It can also expedite the process of documenting the actions taken.

    🚨 Use case 👇🏼
    A coaching organization visits their clients and provides them with career, personal or general consultations. This is a personalized service that requires the consumption of past data and experience the organization has on the client. For this purpose, the organization will build two prompt templates: One that will generate and save preparation notes for the coach in advance of their visit on the record, and another one that will be called from the internal Agentforce panel for the coach’s preparation.

    Automating the generation of coaching notes can reduce manual processing time from 30 minutes to under 5 minutes.

    Data Model

    For this use case, I picked a simple data model. The coaching session object is related to the contact for the client. A coaching session record will be created whenever a session is scheduled or delivered. The status field determines whether a session is scheduled or completed. When a new session is scheduled, the date and time of the session is stamped via a record-triggered flow on the client contact record as the next (upcoming) session date and time.

    The client contact record also has fields that hold the client’s subscription information and their interests.

    Most importantly, all the past coaching session notes need to be considered when preparing for the next session.

    Field Generation Prompt Template

    The coaching session object includes the preparation notes rich text field. When a new coaching session is scheduled a new record is created. Before the coach goes to this session, they can go into this record and generate preparation notes for their upcoming session.

    Field Preparation and Field Generation Prompt Template Creation

    Field generation prompt template receives the coaching session record as input. When setting up this prompt template type, you need to scope it for a text or rich text field. Therefore, remember to create this field before you open your prompt builder. In this case, we will scope this prompt template for the preparation notes rich text field.

    Record Input

    Your prompt template will access all field values of the record it is built on. You can also traverse to related records. You can use field values of the parent records, but referring to related records this way is not very helpful. Your prompt template cannot see a meaningful summary of the related records.

    Template-Triggered Prompt Flow Input

This is where flow input comes in handy. You can create a Template-Triggered Prompt Flow and make that flow accessible from your prompt template. Based on my experience, it can be quite tricky to make this flow available inside your prompt builder resource picker. I think there is a bug, scheduled to be fixed soon, that prevents the flow from showing up in the prompt builder in some cases.

    For this use case, I built a flow that retrieves all the past completed coaching sessions and pastes the relevant information into the prompt for the template to consume.

    Building the flow is not that difficult, but you will need to loop if you want to summarize multiple records.

    Here is how you build the flow:

    • Start a Template-Triggered Prompt Flow.
    • Configure the start element: Choose Automatic Inputs, Field Generation Template Capability Type and point your start element to the Coaching Session object.

    • Get all completed coaching sessions related to the same contact on the triggering record.
    • Loop the coaching sessions.
    • Add an Add to Prompt element inside the loop, and include the relevant field values in the prompt. Here is what mine looked like:
    ---
    Here is one completed past coaching session details:
    Title: {!Loop_Sessions.Title__c}
    Preparation Notes: {!Loop_Sessions.Preparation_Notes__c}
    Session Notes: {!Loop_Sessions.Session_Notes__c}
    Duration (in hrs): {!Loop_Sessions.Duration__c}
    Status: {!Loop_Sessions.Status__c}
    ---
      • Add another Add to Prompt element outside the loop before the end of your flow and add the following text. This is important for when no prior completed records can be found.
    There are no other completed coaching sessions found for this client.
      • Save, debug and activate your flow.

    This is what your flow should look like.

    Writing the Prompt Template Instructions

    Important things to watch for:

          • There are effective reusable instructions you will want to save for the future. Therefore, create a text document and save general instructions and guard rails into this document. You can paste these into your future templates later.
          • Start with, “Your job is,” and give the LLM a job description.
          • Continue with context to ground. Drop in relevant field values here. Remember to pick your field values using the resource picker.
          • Then give the template a summary of the past history from the flow. You should be able to see your flow in your resource picker. If you cannot, you may need to save, deactivate, and reactivate, and try a few things. I had difficulty getting this to work, to be honest.
          • The large language model does not understand checkbox values or picklists automatically. You need to define and explain what these values mean, and what actions need to be taken.
          • As you write these, be sure to set clear guidelines and include both what to do and what to avoid. Additionally, specify how the response should be structured to keep everything consistent and easy to follow. For example:
              • We are writing for a Salesforce Rich Text Field, so you’ll need to follow this guidance below:
                    • Provide output in HTML format. Only Output the text inside the tag pair.
                    • Use bullets or numbered lists to highlight key points.
                    • Enhance readability by adding rich text formatting (i.e., font, text color, background color, etc.) where possible.
                    • Add HTML tags to improve readability.

    Save your prompt template and test it. Save a new version when needed. You will need to point the prompt template to an existing record in Salesforce, therefore create a coaching session record before you test. Read the resolution and response carefully. Make iterations and improve the template. Save and activate your work.

    Field Generation Prompt Template Content

    Here is my complete prompt template content. You can copy and paste this to get started. Remember to delete the inputs, and insert them from your resource picker for accuracy.

Your Job is to provide a note and guidance for the Coach on what they need to know for an upcoming Coaching Session. This response will be used to populate the session preparation notes field on the Coaching Session record.

Use the following context to ground your final response

Context:
Client's Name: {!$Input:Coaching_Session__c.Contact__r.Name}
Client's Birthday: {!$Input:Coaching_Session__c.Contact__r.Birthdate}
Client's Next Scheduled Session Date: {!$Input:Coaching_Session__c.Contact__r.Next_Scheduled_Session__c}
Is Client a Subscriber: They are if the value of {!$Input:Coaching_Session__c.Contact__r.Is_Subscriber__c} is true.
Client's Subscriber Type: {!$Input:Coaching_Session__c.Contact__r.Subscriber_Type__c}
Client's Interest Areas: {!$Input:Coaching_Session__c.Contact__r.Interests__c}
Recent coaching sessions (these are the recently completed coaching session notes separated by "---", extract past experience and interests): {!$Flow:Coaching_Session_PTF_Past_Completed_PT_Flow.Prompt}

Note on Subscriber Types:
There are three types: "Career", "Personal" and "General"
- "Career" focuses on progress and advancement at the work place
- "Personal" focuses on wellbeing and happiness outside of work.
- "General" is a healthy mix of the two other types.

Note on Session Types:
There are 2 types of Session Types: Inaugural (first) and Follow-up sessions. Specify which type is relevant and the duration of the session. If there are one or more completed past coaching session records, then the Follow-up session type will be used; otherwise Inaugural will be set up.
- Inaugural: If the client does not have any previous consultation on record, then this format will be used. This coaching session will be guided toward setting up a plan with the client. Discussions should be centered around making sure they have the proper readiness to start, discussing their interests, discussing their past experience, providing them with a few guidance tips on getting started, and telling them what to expect. Usually go for 1.5 hours. A follow-up is usually scheduled for one month after the first one (this session).
- Follow-up: These sessions are monthly check-ins with the customers to review their progress and answer questions for them based on what they are dealing with. Usually go for 1 hour.

Note on birthdays: Customers with upcoming and past birthdays, within 30 days of the next session scheduled date ({!$Input:Coaching_Session__c.Contact__r.Next_Scheduled_Session__c}) will get a free digital book. Please add this detail into the notes if the birthdate is within 30 days of the next scheduled session. If no birthday is provided on the client contact record, or the birthdate is not within 30 days of the next scheduled session, do not include this information.

- If {!$Input:Coaching_Session__c.Contact__r.Is_Subscriber__c} value is false then the client is not subscribed. Warn the coach: Include a message saying confirm subscription before visit. If the client is subscribed (value = true) do not include a message for the coach about this.

Response:
- Your response should give the coach information to go into the consultation and know what to discuss with the customer.
- Your response should contain important information about the customer, their interests, structure on how the coaching session should go, guidance for their client based on interests and past experience.
- Use break-lines in your response to segment between different ideas
- We are writing for a Salesforce Rich Text Field, so you'll need to follow this guidance below:
- Provide output in HTML format. Only Output the text inside the tag pair.
- Use bullets or numbered lists to highlight key points.
- Enhance readability by adding rich text formatting (i.e., font, text color, background color, etc.) where possible.
- Add HTML tags to improve readability

    Adding the Prompt Template to the Page Layout

    Once the prompt template is finished, you need to add the prompt template to the page layout. Go to the object lightning page layout, activate dynamic forms if not already active, pick the preparation notes field and link it to the prompt template you have created.

    Field Generation Result

    Here is the result of the field generation prompt template on the record.

    Flex Prompt Template

The flex prompt template is a more powerful tool. You can set it up using the prompt text you have already created. A flex prompt template can accept up to five inputs of various types. Flex templates can be built into Agentforce actions and used by Agentforce topics.

Currently, prompt template inputs cannot be changed once configured; you will need to create a new prompt template if you want to change the inputs. Therefore, always add at least one free-text input when configuring a flex prompt template. This gives you the flexibility you need if you later want to add instructions or data to your prompt.

    Object Preparation and Flex Prompt Template Creation

    The input types you can use, when creating a flex prompt template are:

    • Object
    • Free Text
    • Data Model Object

For this template type, you can input any objects you want; they don’t have to be related. If you are working with custom objects, make sure you create your object before you build your template.

You also don’t specify fields when you build a flex template. Your template will have access to all the fields on the records that are passed into the template.

    Flex Template Inputs

    For this use case we will use three input variables:

    • Coaching Session Id
    • Client Contact Id
    • Free Text Input

We could use just the coaching session Id here, but the additional inputs give us flexibility in how we process different scenarios. For example, we may decide to use this prompt template even before the first session is scheduled, in which case the template could work with just the client contact Id.

    Reuse Template Triggered Flow Input

I found that I can reuse the same flow I created for the field generation prompt template. This flow should be accessible in your resource picker when you are building your prompt.

    Writing Flex Prompt Template Instructions

    The prompt template content for this will be very similar to the field generation example. The main difference will be that you will refer to three inputs and the flow input for this template type. You can start with your own template from the previous example and modify it to fit the needs of the flex template. Or you can use my content provided in the code block below to get started.

    Once you finish your first pass, save your prompt template and test it. Save a new version when needed. You will need to point the prompt template to existing records in Salesforce, therefore create a coaching session record before you test. You can add additional data and instructions in the free text input field. Read the resolution and response carefully. Make iterations and improve the template. Save and activate your work.

    Flex Prompt Template Content

    Here is my complete prompt template content. You can copy and paste this to get started. Remember to delete the inputs, and insert them from your resource picker for accuracy.

Your Job is to provide a note and guidance for the Coach on what they need to know for an upcoming Coaching Session.

Use the following context to ground your final response

Context:
Client's Name: {!$Input:ClientIdVar.Name}
Client's Birthday: {!$Input:ClientIdVar.Birthdate}
Client's Next Scheduled Session Date: {!$Input:ClientIdVar.Next_Scheduled_Session__c}
Is Client a Subscriber: They are if the value of {!$Input:ClientIdVar.Is_Subscriber__c} is true.
Client's Subscriber Type: {!$Input:ClientIdVar.Subscriber_Type__c}
Client's Interest Areas: {!$Input:ClientIdVar.Interests__c}
Recent coaching sessions (these are the recent completed coaching sessions that will give you interest and past experience information):
{!$Flow:Coaching_Session_PTF_Past_Completed_PT_Flow.Prompt}

Note on Subscriber Types:
There are three types: "Career", "Personal" and "General"
- "Career" focuses on progress and advancement at the work place
- "Personal" focuses on wellbeing and happiness outside of work.
- "General" is a healthy mix of the two other types.

Note on Session Types:
There are 2 types of Session Types: Inaugural (first) and Follow-up sessions. If there are one or more completed past coaching session records, then the Follow-up session type will be used; otherwise Inaugural will be set up.
- Inaugural: If the client does not have any previous consultation on record, then this format will be used. This coaching session will be guided toward setting up a plan with the client. Discussions should be centered around making sure they have the proper readiness to start, discussing their interests, discussing their past experience, providing them with a few guidance tips on getting started, and telling them what to expect. Usually go for 1.5 hours. A follow-up is usually scheduled for one month after the first one (this session).
- Follow-up: These sessions are monthly check-ins with the customers to review their progress and answer questions for them based on what they are dealing with. Usually go for 1 hour.

Note on birthdays: Customers with upcoming and past birthdays, within 30 days of the next session scheduled date ({!$Input:ClientIdVar.Next_Scheduled_Session__c}) will get a free digital book. Please add this detail into the notes if the birthdate is within 30 days of the next scheduled session. If no birthday is provided on the client contact record, or the birthdate is not within 30 days of the next scheduled session, do not include this information.

Note on Recent Coaching Session records: The recent sessions should contain the client's interests and past experience.

If there are additional notes, they will be provided in {!$Input:FreeTextVar}. Please take these into consideration.

Response:
- If {!$Input:ClientIdVar.Is_Subscriber__c} value is false then the client is not subscribed. Warn the coach: Include a message saying confirm subscription before visit. If the client is subscribed (value = true) do not include a message for the coach about this.
- Your response should give the coach information to go into the consultation and know what to discuss with the customer.
- Your response should contain important information about the customer, their interests, structure on how the coaching session should go, guidance for their client based on interests and past experience.
- Use break-lines in your response to segment between different ideas
- We are writing for a Salesforce Rich Text Field, so you'll need to follow this guidance below:
- Provide output in HTML format. Only Output the text inside the tag pair.
- Use bullets or numbered lists to highlight key points.
- Enhance readability by adding rich text formatting (i.e., font, text color, background color, etc.) where possible.
- Add HTML tags to improve readability

    Adding Your Flex Prompt Template to Agentforce

    I will cover this topic in an upcoming post.

    Conclusion

    As artificial intelligence continues to redefine the landscape of business processes, tools like Salesforce’s Prompt Builder stand at the forefront of this technological shift. The insights Jaswinder Rattanpal shared at TrailblazerDX 2025 inspired this demo and post. There is no doubt that everyone needs to master prompt engineering to prepare for the AI future.

    For Salesforce users and enthusiasts looking to discover more about the capabilities of Prompt Builder, the platform provides comprehensive learning and support through Trailhead, hands-on workshops, and a vibrant community forum. Finally, remember to check back at Salesforce Break for additional posts on Prompt Templates and Agentforce AI.

    Explore related content:

    Salesforce AI: Transforming Data into Sales Engagement

    New Agentforce Specialist Certification

    How to Get Your AI-Powered Enhanced Developer Org with Agentforce and Data Cloud

    #Agentforce #AI #AISpecialist #Einstein #PromptBuilder #Salesforce #SalesforceAdmins #SalesforceDevelopers

    2025-03-31

    Can You Start With a Decision Inside Your Record-Triggered Flow?

    When you build a record-triggered Salesforce flow, you may need to use the decision element to differentiate the path your flow takes based on conditions. Before we dive further into best practices, let’s take a look at what the decision element in flow is and what it does.

    Decision Element

    The Salesforce Flow Decision Element is a logic component used in Flow Builder to route your flow’s path based on specific conditions. Think of it like an “if/else” or “switch” statement in programming: it lets you control what happens next depending on the data or situation.

    What It Does: The Decision Element evaluates data from the flow (like record fields, variables, or formulas) and then directs the flow down a specific outcome path depending on which conditions are met.

    Key Parts of a Decision Element:

    • Label & API Name: A name for easy reference.
    • Outcome(s): These are the different paths the flow can take. Each outcome has a Label and Condition.
    • Default Outcome: A fallback path if none of the other outcomes match.

    🚨 Use Case 👇🏼

    Let’s say you have a flow that processes a case. You can use a Decision Element to check the case priority: Outcome 1 sends the case to the Escalation Queue if Case Priority = High. Outcome 2 sends the case to the Standard Queue if Case Priority = Medium or Low. The Default Outcome can log an error, send a notification, or do nothing.
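    If you think in code, here is a minimal Apex sketch of what those outcomes express. This is for illustration only (the queue Ids are hypothetical placeholders); the Decision Element handles all of this declaratively:

        // A sketch of the Decision Element's outcomes written as an Apex if/else.
        // The queue Ids below are hypothetical placeholders.
        Case myCase = [SELECT Id, Priority, OwnerId FROM Case LIMIT 1];
        Id escalationQueueId = '00G000000000001'; // placeholder queue Id
        Id standardQueueId   = '00G000000000002'; // placeholder queue Id

        if (myCase.Priority == 'High') {
            myCase.OwnerId = escalationQueueId;   // Outcome 1
        } else if (myCase.Priority == 'Medium' || myCase.Priority == 'Low') {
            myCase.OwnerId = standardQueueId;     // Outcome 2
        } else {
            System.debug('No outcome matched.');  // Default Outcome
        }
        update myCase;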

    Record-Triggered Flow

    A record-triggered flow is a type of Salesforce Flow that automatically runs when a record is created, updated, or deleted. It’s often used to automate tasks like updating related records or sending notifications.

    Record-triggered flows are the subject of a popular debate: experts have varying recommendations on how many record-triggered flows are optimal on a given Salesforce object (e.g., Case).

    The best-practice recommendation, and what counts as an anti-pattern here, depends heavily on where you land in that debate.

    One Record-Triggered Flow or Many on a Single Object?

    My recommendation is that you can, and should, have multiple record-triggered flows on a single object if you can optimize the start conditions so that only a small subset of the record-triggered automation logic is executed for a particular record. If you try to combine all the logic into a single flow, you will not be able to tighten your start element conditions effectively; instead, you will resort to using decisions to differentiate paths inside the flow.

    Why Starting Your Record-Triggered Flow With a Decision Can Be an Anti-pattern

    Considering the factors listed above, a decision element that immediately follows your record-triggered flow’s start element most likely points to an inefficient design: you are using decision outcomes to differentiate business logic when you could separate those paths into multiple record-triggered flows and move the conditions into each flow’s start element.

    Let me explain.

    🚨 Use case 👇🏼

    You have three record types for the Case object, representing Hardware, Software, and Other. You have a fairly sophisticated record-triggered flow to process new Cases. For the Other record type, your flow does nothing.

    If the solution in this case is to use a decision element connected to the start element to check the record type, you should consider separating this record-triggered flow into multiple record-type-specific flows.

    And if some of the business logic repeats for more than one record type, you should consider leveraging subflows.

    Why Is This Design More Efficient?

    There is system overhead associated with starting a flow execution. If your flow executes and evaluates conditions in a decision element, you have already consumed cloud resources. If you can stop the flow from executing in the first place, you consume none. Start element conditions do exactly that: they prevent the flow from executing at all.

    Can you achieve this just by tightening your start element conditions and excluding the Other record type? Sure, you can. Is the use case always this straightforward? Not really. For more complex use cases, it may make sense to split your flow into multiple record-triggered flows.
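    For illustration, a tightened entry condition on the Hardware-specific flow might look like the formula sketch below. The Status check is a hypothetical extra condition; substitute your own logic:

        AND(
            {!$Record.RecordType.DeveloperName} = "Hardware",
            ISPICKVAL({!$Record.Status}, "New")
        )

    With a condition like this in the start element, records of the Other record type never trigger the flow at all.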

    Conclusion

    While using a decision right after your start element may make sense in certain situations, it is a potential anti-pattern you should watch for. If the flow is one somebody else built, or one you built ages ago, seeing this pattern should prompt you to inspect and reevaluate its design.

    Leveraging multiple record-triggered flows on a single object, with the support of Flow Trigger Explorer and the execution order setting, can be a very good idea in these situations. If business logic repeats across your triggered automation, consider subflows for easier maintenance.

    This post is part of our Best Practices series! Click HERE to see the rest of the posts.

    Explore related content:

    A Comparative Look at Flow Decision Elements in Salesforce

    How to Use the Action Button Component in Screen Flow

    Start Autolaunched Flow Approvals From A Button

    Error Screen Message Design in Screen Flows

    #Automation #RecordTriggered #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials
    2025-03-25

    Stop Agentforce Dev Orgs From Expiring

    Have you ever discovered—often too late—that the Salesforce Dev Org you worked so hard to set up has suddenly expired? You spend hours or even weeks configuring a demo, exploring new features, or practicing your development skills, only to log in one day and be greeted by an expiration notice. It happens more frequently than we realize, especially given that new specialty and trial orgs sometimes have shorter lifespans. A one-month trial here, a week-long environment there—before you know it, all that effort can vanish without warning.

    Dev Orgs Don’t Last As Long As They Did In The Past

    In the past, standard Salesforce developer orgs might have lasted six months or even a year. Then came specialized developer orgs for different industries, along with new trial orgs that offered limited time frames. Eventually, Data Cloud and now Agentforce Dev Orgs arrived. While these are fantastic for exploring cutting-edge features—like real-time data capabilities and new AI solutions—they usually come with a much shorter shelf life. Salesforce has recently announced new Agentforce Data Cloud developer orgs that expire after 45 days. It is indeed better than just a week, but the clock is still ticking.

    Salesforce does send notification emails to remind users that their developer orgs are nearing expiration. However, these alerts can sometimes blend in with other automated messages, or they may land in spam folders. Whether you are a seasoned developer juggling multiple environments or a newcomer trying to learn the platform, it can be easy to lose track of which org is about to expire and which is still active.

    The Challenge: Tracking Expiration Before It’s Too Late

    Consider how critical these development and testing environments can be. They allow you to:

    • Build proof-of-concept solutions without risking a live production org.
    • Experiment with new Salesforce features or packages in a controlled setting.
    • Demo functionality to colleagues, stakeholders, or potential clients.

    If one of these invaluable environments suddenly expires, you may lose crucial configurations, data setups, and more. The result is wasted time and frustration as you scramble to recreate everything in a fresh org.

    While the standard practice is to keep track of all login details in a password manager or a spreadsheet, not everyone is consistent with that. Even if you meticulously store your credentials, you might not remember to log in regularly to reset the expiration countdown. That is exactly why an automated reminder can save you from this common pitfall.

    Introducing a Schedule-Triggered Flow to Send Automatic Warnings

    The solution is surprisingly straightforward: create a schedule-triggered flow in Salesforce that checks for inactivity in your Agentforce or Data Cloud developer org. If the system sees that you have not logged in for a certain number of days (in this case, 40), it sends you an email alert. This way, you will have time to log in and reset the expiration clock before day 45 sneaks up on you.

    You can build this flow yourself fairly easily, or you can install an unmanaged package I have created. This package contains a flow that runs each night to review login history. If the flow sees that your last login is older than the set threshold, it pings your email to remind you to hop back in. This single step could spare you hours of rebuilding or, worse, losing all the work you have done.

    Below is a walkthrough of how to set up this scheduled flow. You will also find tips for customizing it to your specific username if you have changed it from the default (which normally ends with @agentforce.com).

    Step 1: Confirm Your New Developer Org’s Basics
    First, head to the Setup menu. Check your username and confirm that it ends with @agentforce.com, and confirm that your email address is accurate. You can also check that the Org URL includes “orgfarm,” which is indicative of the new Agentforce and Data Cloud Dev Orgs.

    Agentforce and Data Cloud developer orgs often come with constraints—particularly around how many AI requests you can make. That said, for most learning or exploration scenarios, these limits will not be prohibitive.

    Step 2: Build or Install the Flow
    If you opt to build the flow from scratch, you will create a schedule-triggered flow that runs once daily. Set it to run near midnight so that it checks your org’s login activity for the full day.

    In this flow, you will do the following:

    1. Define a constant (e.g., usernameSearchString) that points to any user whose username ends with @agentforce.com.
    2. Query the User object to find any records whose Username matches that constant. In most new developer orgs, there should only be one match.
    3. Retrieve the LoginHistory for that user, sorting records in descending order by login time. This step ensures you get the most recent login date.
    4. Compare that login date to the current date minus 40 days. If the last login is older than 40 days, proceed to send an email alert. If it is not, the flow does nothing.

    If you have changed your default username, you will need to adjust your constant to match the portion that identifies your user. Otherwise, the out-of-the-box approach (i.e., checking for @agentforce.com) will work fine.
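    If it helps to see the logic in one place before building it declaratively, here is a rough Apex sketch of the nightly check. This is a simplified illustration, not the packaged flow itself:

        // Rough Apex equivalent of the flow's nightly check, for illustration.
        List<User> devUsers = [
            SELECT Id, Username, Email
            FROM User
            WHERE Username LIKE '%@agentforce.com'
            LIMIT 1
        ];
        if (!devUsers.isEmpty()) {
            List<LoginHistory> logins = [
                SELECT LoginTime
                FROM LoginHistory
                WHERE UserId = :devUsers[0].Id
                ORDER BY LoginTime DESC
                LIMIT 1
            ];
            // Warn when the most recent login is more than 40 days old.
            if (!logins.isEmpty() && logins[0].LoginTime < Datetime.now().addDays(-40)) {
                System.debug('Send expiration warning to ' + devUsers[0].Email);
            }
        }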

    Step 3: Configure the Email Alert
    Within the flow, you will set an Action to send an email. This email typically goes to the address tied to the user record (the one you provided when creating the developer org). The email is straightforward but crucial. You might specify a subject line like “Warning: Your Developer Org Will Expire Soon” or something equally attention-grabbing.

    For the email body, you can create a Text Template. Include pertinent details such as the last login date, the username, and a short explanation that logging in again will reset the expiration countdown. Include enough information that you can quickly locate the correct credentials in your password manager. A typical text template might read:

    Subject:
    Warning – Action Needed! Developer Org Nearing Expiration

    Body:
    Hello, This is an automated reminder that your Agentforce developer org, associated with username {!User.Username}, is nearing its 45-day expiration window. Your last login was {!LoginHistory.LoginTime}. To prevent expiration, please log in as soon as possible.

    Thank you!

    You can, of course, modify the text to your preference. The main goal is to ensure you see this email and know exactly which org requires attention.

    Step 4: Activate and Confirm the Flow
    When you install the flow template from the unmanaged package, it might come pre-activated. If you build one from scratch, remember to Activate it. Then, check under Scheduled Jobs in Setup to confirm the flow is listed to run nightly.

    If everything is configured correctly, you will receive an email once your last login date surpasses 40 days. At that point, you know you have a five-day grace period before the 45-day expiration hits—plenty of time to jump back into the org and keep it alive.

    Step 5: Enjoy Peace of Mind
    That is it! No more frantic searches through your password vault only to discover your demo is gone. The next time you spin up a new developer org for Data Cloud exploration, build this simple safety net. You will save yourself the frustration of expired orgs and lost work.

    Install the Unmanaged Package to Leverage the Scheduled Flow

    The link for the unmanaged package is as follows:

    https://login.salesforce.com/packaging/installPackage.apexp?p0=04tgK0000000TOb

    Bonus: How To Add the Org URL to the Email

    One of our YouTube subscribers asked whether the Org URL can be added to the email. This is a very good suggestion, and I am planning on adding it to the unmanaged package in the future.

    If you want to add the URL, please follow these instructions:

    1. Create a URLFormula resource. Your formula will read as follows:
       LEFT($Api.Partner_Server_URL_630, FIND('/services', $Api.Partner_Server_URL_630))
       This trims the SOAP API endpoint down to your org’s base URL (everything before /services).
    2. Go to your email body Text Template, view the source (view as Plain Text), and replace the username resource {!Get_User.Username} with the following:
       <a href="{!URLFormula}">{!Get_User.Username}</a>

    This will make the username value on the email clickable. Once the user clicks on the username, they will be taken to the login page for the Org.

    Final Thoughts on Agentforce Data Cloud Developer Orgs

    These new developer environments are wonderful for trying out the latest Salesforce innovations. You can test specialized features that may not be available in a more traditional developer edition. The big trade-off is their comparatively short expiration period. Forty-five days is enough for many use cases, yet it is still short enough that forgetting just one login can lead to losing your entire environment.

    With the scheduled trigger flow approach, you can have the best of both worlds: a flexible, feature-rich developer org with a relatively short lifespan, backed by a reliable reminder system preventing accidental expirations.

    Watch the video on YouTube for additional details 👇

    Explore related content:

    Salesforce Summer ’25 Preview: Major Flow Changes to Watch For

    How to Get Your AI-Powered Enhanced Developer Org with Agentforce and Data Cloud

    TDX 2025 News: Salesforce Agentforce 2dx and AgentExchange

    My Inaugural Agentforce-Assisted Flow Building Experience

    #Agentforce #DataCloud #DevOrg #Salesforce #SalesforceAdmins #SalesforceDevelopers

