Sunday, June 29, 2025

Mastering Custom Retry and Delay Logic in n8n

One of the most common challenges when building a workflow is dealing with external APIs. Services often have rate limits, temporary outages, or other issues that can cause your workflow to fail. While n8n offers a built-in retry mechanism on most nodes, its options are limited. This guide will walk you through how to build a robust, custom retry and delay system that gives you complete control over your workflow's behavior.

The Limitations of n8n's Default Retry Settings

Most nodes in n8n have a "Settings" tab where you can configure basic retry behavior. You can specify a maximum number of tries and a fixed delay between them.

The default retry settings in an n8n node's "Settings" tab, showing options for "Maximum Tries" and "Delay Between Tries."

However, the maximum delay is capped at 5,000 milliseconds (5 seconds), and the maximum number of tries is limited to 5. This simply isn't enough for many real-world scenarios. For example, if an API has a strict 60-second throttling period, a 5-second delay won't help you at all. To handle these situations, you need to build your own logic.

Building a Custom Retry Loop: Step-by-Step

Our custom solution will use a few core n8n nodes to create a flexible loop that retries a failed node with a custom delay and a custom number of attempts. Let's build it together.

Step 1: Initialize Your Retry Variables with a Set Node

First, we need to define the rules for our retries. We'll use a Set node to create and store these variables at the beginning of the workflow.

  • Node Name: Set Fields
  • Key: max_tries
  • Value: A number representing the total number of attempts you want to make (e.g., 6).
  • Key: delay_seconds
  • Value: A number representing the initial delay in seconds (e.g., 30 for a 30-second delay).

This initial Set node acts as the control panel for our retry logic, making it easy to change the settings later without modifying the rest of the workflow.

The Set Fields node configured with two keys: max_tries and delay_seconds with their respective initial values.
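
If it helps to picture the data, the single item leaving this node is just a small object holding the two control fields (a sketch using the example values above):

```javascript
// Item produced by the "Set Fields" node: the loop's control values.
const controls = {
  max_tries: 6,      // total number of attempts allowed
  delay_seconds: 30, // initial pause between attempts, in seconds
};
```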

Step 2: Configure Your Target Node to Handle Errors

Next, add the node that might fail—in this example, an HTTP Request node. This could be any node that calls an external service. The critical part here is to tell the node what to do when it encounters an error.

An HTTP Request node, which is the target node for the custom retry logic.

  • In the Settings tab of your HTTP Request node, find the On Error dropdown.
  • Change the setting to Continue (using error output).

The "Settings" tab of an HTTP Request node with the "On Error" dropdown set to Continue (using error output).

This setting is essential. Instead of stopping the workflow, it will send the failed data to a new output branch, allowing us to process the error and decide whether to retry.

Step 3: Decrement the Retry Counter

When the target node fails, the data will flow to its "Error" output. We will use the max_tries variable as our counter and decrement it with each failed attempt. We also need to keep the delay_seconds variable in our data to be used later in the loop.

  • Connect another Set node to the Error output of your target node.
  • Node Name: Edit Fields
  • Key: delay_seconds
  • Value: {{$json.delay_seconds}}. This ensures the delay value is carried through the loop without being changed.
  • Key: max_tries
  • Value: Set this value using the expression: {{$json.max_tries - 1}}. This expression simply subtracts 1 from the current value of max_tries with each iteration, tracking our remaining attempts.

The Edit Fields node showing the expression {{$json.max_tries - 1}} for the max_tries key and the expression {{$json.delay_seconds}} for the delay_seconds key.
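
If you find it easier to read as plain JavaScript, here is an optional Code-node equivalent of the same step (the two expressions above do the same job; this is just a sketch):

```javascript
// Optional Code-node equivalent of the "Edit Fields" node. Each incoming
// item is a failed attempt carrying the two control fields.
return $input.all().map((item) => ({
  json: {
    delay_seconds: item.json.delay_seconds, // carried through unchanged
    max_tries: item.json.max_tries - 1,     // one attempt consumed
  },
}));
```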

Step 4: Check if Tries are Remaining with an If Node

Now that we've decremented the counter, we need to check if we should continue trying.

  • Connect an If node to the Edit Fields node.
  • Condition: set the left value to {{$json.max_tries}}, the operation to Is Less Than or Equal, and the right value to 0.

The If node configured to check the condition where the max_tries value is less than or equal to 0.

This If node will have two outputs:

  • True: This path is followed if max_tries is 0 or less. This means we are out of tries, and the workflow should handle the final failure.
  • False: This path is followed if max_tries is greater than 0. This means we still have attempts left.

Step 5: The Delay and Loop Back

This is where the magic happens! We'll use a Wait node to pause the workflow before trying again.

  • Connect a Wait node to the False output of the If node (the path where we still have attempts remaining).
  • Wait Amount: Set this to {{$json.delay_seconds}}. This is the dynamic value we defined in our first Set node.
  • Wait Unit: Seconds

Finally, connect the output of this Wait node back to the input of your target HTTP Request node. This creates a loop. When a failure occurs and we have retries left, the workflow waits for the specified delay and then re-executes the target node.

A screenshot of the complete n8n workflow, including the loop connecting the Wait node back to the target HTTP Request node.

The Wait node with "Wait Amount" set to {{$json.delay_seconds}} and "Wait Unit" set to "Seconds."
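
With all the pieces connected, the behavior of the finished loop can be summarized in plain JavaScript. This is a conceptual sketch only: each step maps to one of the nodes built above, and makeHttpRequest is a hypothetical stand-in for whatever your target node does.

```javascript
// Conceptual sketch of the custom retry loop built above.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const makeHttpRequest = async () => { /* hypothetical target call */ };

let maxTries = 6;      // "Set Fields" node: max_tries
let delaySeconds = 30; // "Set Fields" node: delay_seconds

while (true) {
  try {
    await makeHttpRequest();          // the target HTTP Request node
    break;                            // success: take the success branch
  } catch (error) {
    maxTries -= 1;                    // "Edit Fields": max_tries - 1
    if (maxTries <= 0) {              // If node, true branch: out of tries
      throw error;                    // final failure handling
    }
    await sleep(delaySeconds * 1000); // Wait node, then loop back
  }
}
```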

Advanced Logic: Exponential Backoff

The basic loop uses a fixed delay, but you can make it even smarter. A powerful technique called exponential backoff can prevent you from overloading a server by increasing the delay with each failed attempt.

To implement this, simply modify your Edit Fields node in Step 3. The delay_seconds key's value would be updated like this:

  • Key: delay_seconds
  • Value: {{$json.delay_seconds * 2}}

Now, each time the loop runs, the delay will double. This makes your workflow more polite to the external API and gives the service more time to recover.
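
One subtlety is worth noting: because the Edit Fields node runs before the Wait node, the first wait is already doubled. A quick sketch with the example starting values shows the resulting schedule (start delay_seconds at 15 if you want the first wait to be 30 seconds):

```javascript
// Waits produced by the doubling expression, assuming the example values
// (initial delay_seconds = 30, max_tries = 6, i.e. 5 retries).
let delay = 30;
const waits = [];
for (let retry = 1; retry <= 5; retry++) {
  delay *= 2;        // {{$json.delay_seconds * 2}}
  waits.push(delay);
}
console.log(waits);  // [60, 120, 240, 480, 960] seconds between attempts
```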

What Happens on Success and Final Failure?

  • Success: When the target node finally succeeds, it will no longer send data to the "Error" output. Instead, the data will continue down the "Success" output branch, and the retry loop will be bypassed entirely.
  • Final Failure: If all tries are exhausted, the If node will send the data down its True output (where max_tries is not greater than 0). You can connect a Stop And Error node here to halt the workflow or use a notification node to send an email or a Slack message, alerting you that the task ultimately failed.

By building this custom loop, you gain the flexibility to handle a wide range of API and service issues, making your workflows more resilient and reliable.

Let's execute the workflow to see this approach in action.

A screenshot of the n8n workflow's execution history, displaying that the target node was run multiple times as defined by the custom retry logic.

You'll see that the target node is called exactly the number of times specified in the initial Set node, confirming the reliability and control of this custom retry mechanism.

An n8n template for this approach can be found here.

Saturday, June 21, 2025

Elevate Your n8n Workflows: Google Sheets as Your Intuitive UI 🚀

As you dive deeper into automating with n8n, you'll often find yourself needing more than just simple data processing. Imagine a scenario where you want to:

  • Store input data in a structured, tabular format that's easy to review and manage.
  • Execute your n8n workflow only on specific, relevant parts of your dataset.
  • Capture new fields generated by your n8n workflow's output, neatly organized.

In essence, you're looking for a user interface (UI) for your n8n workflows. While dedicated UIs can be complex to build, Google Sheets offers a remarkably popular, accessible, and straightforward solution. Its seamless integration with n8n makes it an ideal candidate for managing your workflow's inputs and outputs.

Let's walk through a practical example to demonstrate how you can leverage Google Sheets as an effective UI for your n8n automations.

Setting Up Your Google Sheet: The Foundation of Your UI

First things first, we need a Google Sheet to serve as our interface. We'll create a simple sheet with three key fields: Color, Status, and Number.

  • Color: This column will hold our primary input data – in this case, just a color name (e.g., "Red", "Blue", "Green").
  • Status: This is a crucial control field. It will indicate whether a row is READY for processing by our n8n workflow or if it has already been DONE. This allows us to selectively process data.
  • Number: This column will be our output field. We'll use n8n to calculate the string length of the Color name and populate this column.

A screenshot of a Google Sheet with columns for Color, Status, and Number, and sample data populated.

By using these fields, Color acts as our input, Number as our output, and Status provides the necessary control for sequential processing and clear visibility into what's been completed.

Building Your n8n Workflow: Step-by-Step Automation

Now, let's switch over to n8n and configure our workflow to interact with this Google Sheet.

Step 1: Reading Data with the Google Sheets Node

Our first step in n8n is to read the data from our Google Sheet. We'll use the Google Sheets node for this. Configure it to read all rows where the Status column is set to READY. This ensures that our workflow only processes new or unhandled items.

A screenshot of the n8n workflow with the Google Sheets node configured to read rows, filtered by Status equals READY.

Once you execute this node, you'll see your tabular data from the Google Sheet, specifically filtered to show only the rows marked READY. This immediate feedback confirms your connection and filter are working correctly.

The output of the Google Sheets node in n8n, showing only the data rows where Status is READY.
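
Each returned item mirrors one sheet row. Roughly, it looks like the sketch below; row_number is a metadata field the Google Sheets node adds, and we will rely on it later to update the right row:

```javascript
// Approximate shape of one item returned for a READY row:
const row = {
  row_number: 2,   // sheet row index, added by the Google Sheets node
  Color: "Red",    // our input value
  Status: "READY", // the control field we filtered on
  Number: "",      // still empty; the workflow will fill this in
};
```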

Step 2: Processing Data with Loop and Set Nodes

Next, we'll add a Loop node followed by a Set node within that loop. The Loop node is essential because it allows us to process each READY row individually.

A screenshot of the n8n workflow showing a Loop node and a Set node nested inside it.

Inside the Set node, we'll perform three important actions:

  1. Pass row_number to output: It's vital to retain the original row_number from the Google Sheet. This number will be used later to update the correct row.

  2. Calculate and return the length of the Color name: This will be the value we write back into the Number column in our Google Sheet.

  3. Update Status to DONE: After processing a row, we'll change its Status to DONE. This prevents the same row from being processed again in subsequent workflow executions.

A screenshot of the n8n Set node's configuration, showing the expression to get the row_number and the expression to calculate the length of the Color string, along with the Status value being set to DONE.
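
For reference, here is an optional Code-node sketch of the same three assignments (column names assumed from the sheet above; in the Set node itself, the length calculation is just the n8n expression {{ $json.Color.length }}):

```javascript
// Optional Code-node equivalent of the Set node's three fields:
return $input.all().map((item) => ({
  json: {
    row_number: item.json.row_number, // kept so we can update the right row
    Number: item.json.Color.length,   // e.g. "Red" -> 3, "Green" -> 5
    Status: "DONE",                   // marks the row as processed
  },
}));
```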

Step 3: Updating Rows with the Google Sheets Node (Again!)

Finally, we'll add another Google Sheets node, but this time, its purpose is to update the rows we've just processed.

A screenshot of the n8n workflow with a second Google Sheets node added, configured to perform an 'Update' operation.

n8n is quite smart here! It will automatically map the columns from your workflow's output to the corresponding columns in your Google Sheet. Crucially, it will use the row_number we passed through the Set node to match and update the correct row in your sheet.

A screenshot of the second Google Sheets node's update configuration, highlighting how it uses row_number to match and update the correct row.

Executing and Verifying Your Workflow

With all nodes configured, it's time to execute our n8n workflow and observe the magic!

The n8n output should look exactly as expected, showing the Number (length of the color name) and the Status updated to DONE for each processed row.

A screenshot of the final execution output in n8n, showing the processed data with the Number field populated and the Status updated to DONE.

Now, let's hop back to our Google Sheet itself to confirm the changes.

A screenshot of the Google Sheet, showing that the Number and Status columns have been updated by the n8n workflow.

Fantastic! The Google Sheet has been updated as well. The Number column now contains the calculated lengths, and the Status for those rows is DONE.

Controlling Your Workflow's Input

Here's the real power of this setup: If you execute your n8n workflow again right now, nothing will happen. Why? Because there are no more rows with the Status set to READY! This simple yet effective mechanism allows you to fully control which input values your n8n workflow processes. You can add new READY rows whenever you need to trigger the workflow for new data.

This approach not only provides a clear way to feed input values to your n8n workflow but also gives you immediate visibility into the output values and the processing status of each item.

You can access the Google Sheet used in this example here and find the n8n template here to get started quickly!

Sunday, June 15, 2025

How to Properly Test Your n8n Sub-workflows

Reusing node sequences in n8n is a powerful best practice, but testing these sub-workflows in isolation can be a challenge. This guide will show you how to build a robust, testable sub-workflow that is easier to manage and reuse. We'll then walk you through the essential steps to independently test your sub-workflows, ensuring your modular automations are both robust and reliable.

To begin, let's create a straightforward sub-workflow that we can use for our testing. The very first step in building any sub-workflow that will be called from a main workflow is to add the Execute Sub-workflow Trigger node. This node is the crucial entry point for your sub-workflow, serving as the "door" through which data and control flow from the parent workflow. It's designed to receive the incoming data and act as the starting point for all the subsequent nodes you will add, ensuring a seamless connection between your parent and child automations.

A screenshot of the n8n canvas showing a single 'Execute Sub-workflow Trigger' node, which is the entry point for the sub-workflow.

With the trigger node in place, the next crucial step is to define the inputs it expects. This allows the main workflow to pass specific data—like a user's name or a piece of text—into the sub-workflow for processing. To do this, simply double-click the node and navigate to the 'Parameters' section. Here, you can define the exact data fields your sub-workflow needs to function correctly.

To set up these inputs, simply click the "Add field" button. You can then name your field (for example, "color") and assign a data type to it, such as a string.

A screenshot of the n8n 'Execute Sub-workflow Trigger' node's 'Parameters' section, showing a new text field for 'color' being defined.

Now that the input structure is defined, we face our main challenge: testing the sub-workflow. Because the Execute Sub-workflow Trigger node is designed to be called by another workflow, it cannot be executed on its own. To get around this and test our sub-workflow in isolation, we will add a Manual Trigger node. This allows us to manually initiate the workflow, providing a simple way to test the logic you build without needing to create and run a separate parent workflow every time.

To implement this, simply add a Manual Trigger node to your canvas. This node can be positioned anywhere on the canvas, but for clarity, it's best to place it to the side or above your main sub-workflow logic. This setup allows you to easily execute the workflow on demand with a click of a button.

A screenshot of the n8n canvas showing both the 'Execute Sub-workflow Trigger' node and a newly added 'Manual Trigger' node, arranged to allow for isolated testing of the sub-workflow.

Next, we'll introduce the key to making this testing setup work seamlessly: two Edit Fields (Set) nodes. The first, which we can call our 'Test Input' node, will be used to create the test input data that our Manual Trigger node will pass into the workflow. The second, which we'll call our 'Combine Input' node, will then act as the bridge, dynamically combining the output from either the Manual Trigger (for testing) or the Execute Sub-workflow Trigger (for live execution) into a single, consistent data format. This ensures that the rest of your sub-workflow logic receives the correct data no matter how it's initiated.

To set this up, connect the first Edit Fields (Set) node directly to your Manual Trigger node. This is where you'll define your static test data. Then, the second Edit Fields (Set) node will have two inputs: one coming from the first Edit Fields (Set) node and the other from the Execute Sub-workflow Trigger. This node is the final entry point for your sub-workflow's core logic.

A screenshot of the n8n canvas showing two 'Edit Fields (Set)' nodes. The first is connected to the 'Manual Trigger' node, and the second is connected to both the first 'Edit Fields (Set)' node and the 'Execute Sub-workflow Trigger' node.

Now, let's configure the first Edit Fields (Set) node, which we can call our Test Input node. This is where we will create the sample data needed to test our sub-workflow without a parent workflow. To do this, open the node and define the color field you set up earlier. Give it a test value, such as "blue" or "green." This step ensures that when you run the workflow manually, your sub-workflow will have data to process, just as it would in a real-world scenario.

For a visual reference, you can see how this Test Input node is configured in the screenshot below. It shows the 'color' field with the static value that will be used for testing.

A screenshot of the n8n 'Edit Fields (Set)' node named 'Test Input', showing the 'color' field set to a static value like 'blue' or 'green'.

Now, let's configure the second Edit Fields (Set) node, which serves as the entry point for your sub-workflow's core logic. The purpose of this node is to merge data from both triggers into a single, unified data structure. To do this, you must enable the Include Other Input Fields option, which will automatically bring in all the fields from the previous nodes. By doing so, you guarantee that all subsequent nodes will have access to the data they need, no matter how the workflow was initiated.

A screenshot of the n8n 'Combine Input' node's configuration, showing the 'Include Other Input Fields' option enabled.
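
Conceptually, the Combine Input node just forwards whatever fields arrive on either input, so both paths deliver the same item shape. A minimal sketch, assuming the single color field defined earlier:

```javascript
// Items arriving at the Combine Input node, one from each path:
const fromManualTest = { color: "blue" };      // Manual Trigger -> Test Input
const fromParentWorkflow = { color: "green" }; // Execute Sub-workflow Trigger

// With "Include Other Input Fields" enabled, the node passes these fields
// through untouched, so every downstream node can rely on { color: ... }.
console.log(fromManualTest.color, fromParentWorkflow.color);
```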

Once all nodes are in place and configured, your sub-workflow is ready for isolated testing. The final setup should clearly show the two paths for data: one for development via the Manual Trigger and Test Input node, and the second for production from the Execute Sub-workflow Trigger.

Let's try to execute this n8n sub-workflow.

A screenshot of the n8n canvas showing a successfully finished workflow with green ticks on each executed node.

After you execute the workflow (by clicking the "Execute Workflow" button on the canvas), you can verify the results by clicking on the output of the Combine Input node: the color value should be blue.

A screenshot of the n8n canvas showing the output of the 'Combine Input' node. The output displays the test data, such as a 'color' field with the value 'blue'.

Because the Combine Input node provides a consistent data structure, you can confidently refer to the input values (like color) in all subsequent nodes, regardless of whether the sub-workflow was triggered for testing or live execution. Now let's look at an example of how you could use this consistent output in an If node to add conditional logic to your sub-workflow, using the color value from a previous node.

A screenshot of the n8n canvas showing an 'If' node connected after the 'Combine Input' node. The 'If' node is configured to check if the 'color' field value is equal to a specific string, such as 'blue'.
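
In plain JavaScript, that If node's check amounts to the comparison below (the "blue" comparison value is just the example from our test data):

```javascript
// Equivalent of the If node's condition:
const item = { color: "blue" };       // consistent output of Combine Input
const isBlue = item.color === "blue"; // {{ $json.color }} is equal to "blue"
console.log(isBlue ? "true branch" : "false branch");
```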

You can find a complete template of this reusable and independently testable sub-workflow here.