Quickstart for dbt and Salesforce Data Cloud
Introduction
In this quickstart guide, you'll use dbt with Salesforce Data Cloud through the new dbt Salesforce Data Cloud adapter (powered by dbt Fusion). You'll learn how to:
- Install Fusion and the Salesforce adapter
- Connect your dbt account or local dbt environment to Salesforce Data Cloud
- Run and test dbt models using Data Cloud objects
- Understand key limitations and Data Cloud–specific model requirements
This adapter is currently in Alpha. It is not production-ready and should only be used in sandbox or test environments. Features, commands, and workflows are subject to change.
Prerequisites
- You have a dbt account or a local dbt Fusion setup.
- You have access to a Salesforce Data Cloud instance with API access and credentials (OAuth and service token).
- You have a code editor such as VS Code (or Cursor) with permissions to install extensions.
- You have a connected Git provider with repository access.
You’ll need the following Salesforce credentials:
- Client ID (Consumer Key) from your connected app secrets
- Private Key Path (the full file path to your downloaded server.key)
- Salesforce username
Install Fusion
The Salesforce adapter is available through dbt Fusion, a next-generation dbt binary. Follow the installation steps for your platform.
macOS & Linux
curl -fsSL https://public.cdn.getdbt.com/fs/install/install.sh | sh -s -- --update
exec $SHELL
Windows (PowerShell)
irm https://public.cdn.getdbt.com/fs/install/install.ps1 | iex
Start-Process powershell
To verify the Fusion installation, run:
dbtf --version
Installing Fusion automatically provides the Salesforce adapter and its ADBC driver.
Install the dbt Extension for VS Code
- In VS Code, navigate to the Extensions tab.
- Search for dbt and locate the extension published by dbtLabsInc or dbt Labs Inc.
- Click Install.
- Once installed, look for the dbt Extension label in your VS Code status bar.
- Hover over it to see diagnostic info.
- Register and activate the extension.
After activation, the extension will automatically download the correct dbt Language Server for your OS. You can learn more about how to use the dbt VS Code extension in the dbt Extension documentation.
Set up a dbt Project
- Upload the provided dbt Salesforce Data Cloud example project (zip file) to your Git repository.
- Clone the repository locally through VS Code.
- Follow the instructions in the project’s README, starting from Step 3:
- Create a profiles.yml
- Confirm your connection
- Compile and execute the sample Jaffle Shop dbt project
- Run data tests to validate your models
You should now be able to run dbtf run and dbtf test successfully against your Salesforce Data Cloud instance.
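Because static analysis isn't yet supported for Data Cloud (see Known Limitations below), include the --static-analysis off flag with every command:

dbtf run --static-analysis off
dbtf test --static-analysis off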
Connection Configuration
Your profiles.yml file will look similar to the following:
salesforce_data_cloud:
  target: dev
  outputs:
    dev:
      type: salesforce_data_cloud
      method: oauth
      client_id: "<YOUR_CLIENT_ID>"
      private_key_path: "<YOUR_PATH>/server.key"
      user: "<YOUR_USERNAME>"
      database: "<YOUR_DATA_SPACE>"
      schema: "default"
      threads: 4
Then run:
dbtf debug
If successful, you should see:
Connection test: OK connection
Execute Your First dbt Run
- Open your cloned dbt Salesforce project in VS Code.
- Run the following command in the terminal:
dbtf run --static-analysis off
This disables static analysis (required due to current Data Cloud limitations). If successful, your models will be built in Salesforce Data Cloud as Data Lake Objects (DLOs).
Known Limitations
| Feature | Timeline | Notes |
|---|---|---|
| Running dbt run twice | Coming soon | Due to Data Cloud’s architecture, rerunning the same model is not currently supported. Manually delete dependencies before rerunning. |
| Materializations supported | — | Only table materializations are supported. Incremental, snapshot, view, and ephemeral are not. |
| Static Analysis | Ongoing | Must include --static-analysis off in every dbt command. Impacts column-level lineage and VS Code dbt buttons. |
| dbt Seeds | Coming soon | Not yet supported. |
| dbt Docs | N/A | Not currently available in Fusion for Data Cloud. |
| Arbitrary Queries | Not on roadmap | All queries must be tied to defined dbt sources. |
| SELECT * | Coming soon | Standard metadata queries currently fail due to injected system columns. |
| Multi-dataspace writes | Coming soon | Future support for writing to multiple dataspaces. |
| Configurable timeout | Coming soon | Defaults to 5 minutes; not configurable yet. |
| Canceling dbt runs | Coming soon | Future support planned. |
VS Code Callouts
- The Problems table in VS Code may show namespace errors due to Data Cloud’s lack of schema support.
- Models must end with __dll. If omitted, it is appended automatically (for example, model_name → model_name__dll).
- Columns must end with __c. Omitting this suffix causes syntax errors.
- Model names cannot include double underscores (__) except before __dll.
- For category=profile, all models must include the following config (see the sketch after this list):
config:
  category: profile
  primary_key: id
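As a sketch of how this fits together, here is a hypothetical YAML entry for the customers__dll model used in this guide; placing the config block under a models: entry in schema.yml is an assumption, so adjust it to your project's conventions:

models:
  - name: customers__dll
    config:
      category: profile
      primary_key: id # hypothetical key column, per the snippet above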
Example: Simple Model in Salesforce Data Cloud
Create a file named customers__dll.sql in your models folder:
select
id__c as customer_id__c,
first_name__c,
last_name__c
from {{ source('jaffle_shop', 'customers') }}
Run the model:
dbtf run --models customers__dll --static-analysis off
You should see the Data Lake Object created successfully in your Salesforce Data Cloud environment.
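The model selects from {{ source('jaffle_shop', 'customers') }}, and the limitations table notes that all queries must be tied to defined dbt sources. A minimal sources file might look like the following sketch; the file name, schema, and identifier here are assumptions to adjust to your own Data Cloud objects:

models/sources.yml

version: 2

sources:
  - name: jaffle_shop
    schema: default # assumption: matches the schema in profiles.yml
    tables:
      - name: customers
        identifier: customers__dll # assumption: the underlying object name in your data space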
Add tests to your models
Adding data tests to a project helps validate that your models are working correctly.
To add data tests to your project:
- Create a new YAML file in the models directory, named models/schema.yml.
- Add the following contents to the file:

models/schema.yml

version: 2

models:
  - name: customers
    columns:
      - name: customer_id
        data_tests:
          - unique
          - not_null
  - name: stg_customers
    columns:
      - name: customer_id
        data_tests:
          - unique
          - not_null
  - name: stg_orders
    columns:
      - name: order_id
        data_tests:
          - unique
          - not_null
      - name: status
        data_tests:
          - accepted_values:
              arguments: # available in v1.10.5 and higher. Older versions can set the <argument_name> as the top-level property.
                values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']
      - name: customer_id
        data_tests:
          - not_null
          - relationships:
              arguments:
                to: ref('stg_customers')
                field: customer_id

- Run dbt test, and confirm that all your tests passed.
When you run dbt test, dbt iterates through your YAML files, and constructs a query for each test. Each query will return the number of records that fail the test. If this number is 0, then the test is successful.
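For instance, the unique test on customer_id compiles to SQL along these lines (a simplified sketch; the table name and the exact generated query will differ):

-- returns one row per duplicated customer_id; zero rows means the test passes
select
    customer_id,
    count(*) as n_records
from customers
group by customer_id
having count(*) > 1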
Document your models
Adding documentation to your project allows you to describe your models in rich detail, and share that information with your team. Here, we're going to add some basic documentation to our project.
- Update your models/schema.yml file to include some descriptions, such as those below.

models/schema.yml

version: 2

models:
  - name: customers
    description: One record per customer
    columns:
      - name: customer_id
        description: Primary key
        data_tests:
          - unique
          - not_null
      - name: first_order_date
        description: NULL when a customer has not yet placed an order.
  - name: stg_customers
    description: This model cleans up customer data
    columns:
      - name: customer_id
        description: Primary key
        data_tests:
          - unique
          - not_null
  - name: stg_orders
    description: This model cleans up order data
    columns:
      - name: order_id
        description: Primary key
        data_tests:
          - unique
          - not_null
      - name: status
        data_tests:
          - accepted_values:
              arguments: # available in v1.10.5 and higher. Older versions can set the <argument_name> as the top-level property.
                values: ['placed', 'shipped', 'completed', 'return_pending', 'returned']
      - name: customer_id
        data_tests:
          - not_null
          - relationships:
              arguments:
                to: ref('stg_customers')
                field: customer_id

- Run dbt docs generate to generate the documentation for your project. dbt introspects your project and your warehouse to generate a JSON file with rich documentation about your project.
- Click the book icon in the Develop interface to launch documentation in a new tab.
Commit your changes
Now that you've built your customer model, you need to commit the changes you made to the project so that the repository has your latest code.
If you edited directly in the protected primary branch:
- Click the Commit and sync git button. This action prepares your changes for commit.
- A modal titled Commit to a new branch will appear.
- In the modal window, name your new branch add-customers-model. This branches off from your primary branch with your new changes.
- Add a commit message, such as "Add customers model, tests, docs" and commit your changes.
- Click Merge this branch to main to add these changes to the main branch on your repo.
If you created a new branch before editing:
- Since you already branched out of the primary protected branch, go to Version Control on the left.
- Click Commit and sync to add a message.
- Add a commit message, such as "Add customers model, tests, docs."
- Click Merge this branch to main to add these changes to the main branch on your repo.
Deploy dbt
Use dbt's Scheduler to deploy your production jobs confidently and build observability into your processes. You'll learn to create a deployment environment and run a job in the following steps.
Create a deployment environment
- From the main menu, go to Orchestration > Environments.
- Click Create environment.
- In the Name field, write the name of your deployment environment. For example, "Production."
- In the dbt Version field, select the latest version from the dropdown.
- Under Deployment connection, enter the name of the dataset you want to use as the target, such as "Analytics". This will allow dbt to build and work with that dataset. For some data warehouses, the target dataset may be referred to as a "schema".
- Click Save.
Create and run a job
A job is a set of dbt commands that you want to run on a schedule. For example, dbt build.
As the jaffle_shop business gains more customers, and those customers create more orders, you will see more records added to your source data. Because you materialized the customers model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job.
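For reference, a table materialization is typically declared at the top of the model's SQL file, as in this minimal sketch:

-- at the top of models/customers.sql
{{ config(materialized='table') }}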
- After creating your deployment environment, you should be directed to the page for a new environment. If not, select Orchestration from the main menu, then click Jobs.
- Click Create job > Deploy job.
- Provide a job name (for example, "Production run") and select the environment you just created.
- Scroll down to the Execution settings section.
- Under Commands, add this command as part of your job if you don't see it:
dbt build
- Select the Generate docs on run option to automatically generate updated project docs each time your job runs.
- For this exercise, do not set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as deploying a project.
- Click Save, then click Run now to run your job.
- Click the run and watch its progress under Run summary.
- Once the run is complete, click View Documentation to see the docs for your project.
Congratulations 🎉! You've just deployed your first dbt project!