Saturday, August 2, 2025

From Binary to Base64: A Guide to File Encoding in n8n Workflows

Have you ever tried to send a file to an API and found that it won't accept a standard file upload? Some services require the file to be converted into a Base64 encoded string first. This can be a tricky task if you're not familiar with it. This post will guide you through the process of handling these scenarios in n8n, making file uploads to any API a breeze, regardless of the format it demands.

For the simplest cases, where you have a single file that needs to be encoded, n8n provides a dedicated solution. The "Extract From File" node is a perfect tool for this. It allows you to take a binary file and, with a simple operation setting, convert its content directly into a Base64 string that you can then use in your next node, such as an HTTP Request.

In the example we'll build, we'll start with an "HTTP Request" node to download the n8n logo from its GitHub repository. This will output the logo as binary data. We will then connect this to the "Extract From File" node, choosing the "Move File to Base64 String" operation, which will perform the Base64 encoding. To see this in action, check out the following screenshots:

The n8n workflow showing an HTTP Request node configured to download the n8n logo from GitHub.

The Extract From File node with the "Move File to Base64 String" operation selected, converting the binary data into a Base64 encoded string.

You can inspect the results of this process by checking the output of each node. The "HTTP Request" node will provide the logo as binary data, while the "Extract From File" node will show the same data, but now as a Base64 encoded string, ready to be used in your workflow:

Output of the HTTP Request node, showing the n8n logo as raw binary data.

The output of the Extract From File node, showing the binary data successfully converted into a Base64 encoded string.

Okay, this works as expected, but what do we do if we have multiple binary items to encode? Let's see whether the same node we used above can handle this more complex scenario.

To simulate a situation with multiple files, we'll modify our workflow. We will change the "HTTP Request" node to download a ZIP file from the n8n demo website repository. Then, we will insert a "Compression" node to unzip the contents, which will give us a collection of binary files to work with:

The updated HTTP Request node configured to download a ZIP file from the n8n demo website repository.

The n8n workflow showing a Compression node configured to unzip the downloaded file, resulting in a collection of multiple binary files.

Now, let's return to our "Extract From File" node and inspect its parameters. When we examine the node's settings, we can see that it only has a field for a single binary file. Let's run our workflow with the multiple files and see what happens. The result is likely not what we expected: only one of the binary files is converted to Base64, while the others are left unchanged. This shows that the node isn't designed to handle multiple binary items at once:

The Extract From File node attempting to process multiple binary files, with its limitations for this task clearly visible in the single input field.

The output of the Compression node, showing multiple binary files from the unzipped archive.

Output of the Extract From File node, showing that only one of the binary files was converted to a Base64 string, while the others remain unchanged.

Given this limitation, what's a good alternative? Let's look for a pure-code solution. A great option is the Node.js built-in Buffer class; you can read more about it in the official Buffer documentation. Buffer instances have a toString() method that accepts an encoding argument, and 'base64' is one of the supported encodings. This seems like a perfect way to handle our multiple binary items. It's time to put this into practice and see how it works!
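Before wiring this into the workflow, a quick standalone sketch (plain Node.js, nothing n8n-specific) shows the Buffer Base64 round trip:

```javascript
// Encode a UTF-8 string to Base64 and decode it back using Node.js Buffer.
const original = 'Hello, n8n!';

// Buffer.from(string) creates a buffer from UTF-8 text;
// toString('base64') renders its bytes as a Base64 string.
const encoded = Buffer.from(original).toString('base64');

// Buffer.from(string, 'base64') parses a Base64 string back into bytes.
const decoded = Buffer.from(encoded, 'base64').toString('utf8');

console.log(encoded);             // "SGVsbG8sIG44biE="
console.log(decoded === original); // true
```

The same toString('base64') call is all we need inside the n8n Code node; the rest of the work is just collecting the binary items.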

After a few minutes of work, the solution is ready to be implemented. Let's check out the code:

/**
 * Encodes multiple binary files from an n8n input item into Base64 strings.
 *
 * This code assumes it is running in an n8n "Code" or "Function" node
 * where 'this' refers to the node's context and 'helpers' are available.
 *
 * @returns {object} An object containing an array of file objects,
 * each with a 'path' and 'data' (Base64 string).
 */
const results = [];

// n8n Code nodes support top-level 'await', so the helper call below
// can be awaited directly without wrapping it in an async function.
for (const file in $input.first().binary) {
  try {
    // Retrieve the binary object for the current file key.
    const bin = $input.first().binary[file];

    // Use n8n's helper function to get the file buffer.
    const binBuffer = await this.helpers.getBinaryDataBuffer(0, file);

    // Construct the file path, handling cases with or without a directory.
    const path = bin.directory
      ? `${bin.directory}/${bin.fileName}`
      : bin.fileName;

    // Push a new object to the results array.
    results.push({
      path: path,
      data: Buffer.from(binBuffer).toString('base64'),
    });
  } catch (error) {
    // Log any errors that occur during processing.
    console.error(`Error processing file "${file}": ${error.message}`);
    // You could also choose to throw the error or handle it differently here.
  }
}

// Return the final object in the expected format for the next node.
return { files: results };

Now, let's try this code in a "Code" node and check the results:

The final output of the Code node, showing an array of objects. Each object contains a path and a data attribute, with the data attribute holding the correctly Base64-encoded content for each file.

We are happy to see all of our binary files correctly converted into an array of objects, with path and data attributes, and the data correctly encoded to Base64. This solution works perfectly and is a simple, effective way to handle multiple binary files in your workflow.

The link to the template from this blog post can be found here.

Sunday, July 20, 2025

How to Prevent Concurrent n8n Workflows with a Simple Trick

As you start building more complex and dynamic automations, you'll inevitably run into a common challenge: managing multiple workflow executions. If you've been following the tips in my previous posts, you know how powerful it can be to use Google Sheets as a control panel for your workflows. You have a central sheet with many rows, and your workflow needs to process each row, one by one.

This approach is incredibly flexible, but it introduces a new problem. To keep your automation responsive, you've likely set up a Scheduled Trigger to check for new data every few minutes. But what happens if one execution takes longer than expected? The next scheduled execution will start while the first is still running, leading to concurrent executions that can cause a real mess. They'll both try to process the same rows, potentially leading to errors, duplicate work, or lost data.

Let's break down this common scenario and the simple, elegant fix.

The Workflow Setup: A Quick Recap

Imagine your workflow is designed to manage a large, dynamic list of tasks in a Google Sheet. It's set up to do the following:

  1. A Schedule Trigger node runs the workflow every 5 minutes.
  2. A Google Sheets node reads a list of rows that need to be processed.
  3. The workflow then uses a Loop to process each row individually.
  4. After processing each row, it updates the Google Sheet with the new status.

This design is great because it ensures that even if an execution fails halfway through, the processed rows are already marked as complete. The next execution will simply pick up where the last one left off.

A screenshot of an n8n workflow. It begins with a 'Schedule Trigger' node, which connects to a 'Google Sheets' node to read data. This is followed by a 'Wait' node for demonstration purposes, and then a loop that processes the data and updates the Google Sheet.

The Pitfall: When Workflows Collide

The issue arises when a single execution takes longer than the scheduled interval. For instance, if one workflow execution processes 50 rows, and each row takes a few seconds to handle, the total runtime could be over 5 minutes.

In this situation, the schedule trigger will fire again at the 5-minute mark, launching a new, completely separate execution of the same workflow. Both executions are now running simultaneously, which is exactly what we want to avoid. You might see two entries for the same workflow in your n8n Executions list, both doing the same work.

A screenshot of the n8n execution list showing two concurrent executions of the same workflow.

The Solution: A Simple Timeout Setting

The key to solving this problem lies in a powerful, often overlooked feature of n8n workflows: the Workflow Timeout. This setting defines the maximum amount of time a workflow is allowed to run before it is automatically stopped.

The trick is to simply set the workflow timeout to the exact same value as your scheduling interval.

For our example, since the schedule trigger is set to 5 minutes, you would set the workflow timeout to 5 minutes as well. This creates a perfect sync: if a workflow runs for the entire 5-minute duration, n8n stops it. With this setting, if an execution is still running when the next schedule is due, n8n's scheduler will not launch a new one until the active execution completes or hits its timeout.

This simple change acts as a guardrail, ensuring that only one instance of your workflow is ever active at any given time.

A screenshot of the workflow settings in n8n, showing the 'Workflow Timeout' option set to 5 minutes.

The Result: Smooth, Sequential Processing

Once you've made this change, your workflow will behave exactly as you intended. Let's look at the result:

  • The schedule trigger fires at 10:00 AM.
  • The workflow starts and processes rows for 3 minutes.
  • The workflow completes at 10:03 AM.
  • At 10:05 AM, the schedule trigger fires again. A new execution starts, picking up the next set of rows.

Even in a case where the first execution takes longer, the next execution will only start after the first one has finished. This ensures that your processing is always sequential and reliable, preventing any overlap, data conflicts, or redundant work.

A screenshot of the n8n execution list showing one workflow execution cancelled due to timeout and a new one started after it.

This powerful but simple technique is an essential building block for creating robust, production-ready n8n automations. You can find the template for the described approach here.

Saturday, July 12, 2025

Handling Google Sheets API Rate Limits in n8n

When building an n8n workflow, it's a common and effective practice to use a Google Sheet as a user interface or a database. This approach works perfectly for small batches of data. But what happens when your data volume scales up, and your workflow starts failing with a cryptic error message like "The service is receiving too many requests from you"? This error is your friendly reminder that you've hit the Google Sheets API's rate limit.

Let's break down what's happening and how to fix it with a simple, yet powerful, strategy.

The Problem: When a Simple Workflow Fails

Imagine a simple n8n workflow designed to process a large number of items and update a Google Sheet. You've likely configured it to do something like this:

  1. Read Data: A node (like a Google Sheets node or an external data source) reads a large list of items.
  2. Process Data: A series of nodes performs some logic on each item.
  3. Update Google Sheet: A final Google Sheets node is placed inside a loop to update each row of your spreadsheet individually.

A typical n8n workflow showing a "Google Sheets" node connected to a series of nodes that perform a function, which then loop back to another "Google Sheets" node to update the data.

Adding more data to the Google Sheet to test the n8n workflow.

The problem is that even when a Google Sheets node is configured to update one row at a time within a loop, n8n can process data incredibly fast. This can cause the workflow to send a barrage of requests to the Google Sheets API in rapid succession, exceeding the number of requests the API allows per minute. The API, in turn, blocks further requests from your account, causing your workflow to fail.

An n8n workflow execution log showing a failure with a "Google Sheets" node due to a quota exceeding error.

The Solution: The Wait Node and Retry/Delay

The best way to handle this is to implement a retry/delay strategy. This approach tells your workflow to "take a break" whenever it encounters a rate limit error, giving the Google Sheets API time to reset before trying again. The key to this strategy is the Wait node.

Here’s how you can modify your failing workflow to make it resilient to rate limits. The core idea is to route any failed requests to a Wait node, which then holds the execution for a set period before retrying.

Step 1: Identify the Nodes That Might Fail

The most likely culprit for this error is the Google Sheets node itself. In a typical workflow, this is the node responsible for writing data. You will have a Google Sheets node configured to "Update a Row" for each item in your data set.

Step 2: Add a Wait Node

Drag a Wait node from the node panel onto your canvas. This node is a simple but powerful tool for controlling the flow of your workflow.

Step 3: Connect the Error Output

The crucial step is to connect the error output of your Google Sheets node to the input of the Wait node. When the Google Sheets node fails due to a rate limit, the data will be routed along this red error path instead of the regular success path.

Step 4: Configure the Wait Node

In the Wait node's settings, you'll need to specify how long the workflow should pause. A good starting point for the Google Sheets API is a minute, but you can experiment with shorter or longer delays based on your specific needs. Set the Wait Amount to 1 and the Unit to Minutes.

An n8n Wait node configuration showing a wait amount of 1 and a unit of Minutes.

Step 5: Connect the Wait Node Back to the Google Sheets Node

Now, connect the output of the Wait node back to the input of the Google Sheets node that originally failed. This creates a loop:

  1. The workflow attempts to update the sheet.
  2. If it fails with a rate limit error, it gets sent to the Wait node.
  3. The workflow pauses for one minute.
  4. After the wait, the data is sent back to the Google Sheets node, and the update is retried.

This "self-healing" loop continues until all items are processed successfully.
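The retry loop these nodes implement can be sketched in plain JavaScript. This is an illustrative stand-in, not n8n code: updateRow is a hypothetical function representing the Google Sheets update, and we treat any error whose message mentions a quota or too many requests as a rate-limit error.

```javascript
// Sketch of the Wait-node retry loop: on a rate-limit error,
// pause for the given delay and try the same update again.
async function updateWithRetry(updateRow, row, delayMs) {
  for (;;) {
    try {
      return await updateRow(row); // success: leave the loop
    } catch (error) {
      // Anything that isn't a rate-limit error should still fail fast.
      if (!/too many requests|quota/i.test(error.message)) throw error;
      // Equivalent of the Wait node: hold the execution before retrying.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Like the workflow, this keeps retrying until the update goes through, while genuine errors (bad credentials, a missing sheet) still stop the run immediately.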

The Final Workflow

After implementing this change, your final workflow should look something like this:

The updated n8n workflow with a Wait node connected to the error output of the Google Sheets node, creating a retry loop to prevent failure due to rate limits.

With this setup, the workflow will no longer fail outright. Instead, it will gracefully handle the rate limit, wait for a brief period, and then continue processing your data. If you check the execution log, you'll see the Wait node executed only once when the rate limit was hit, proving that the delay strategy works exactly as intended.

A successful n8n workflow execution log showing that the Wait node was executed once, and the entire workflow passed successfully.

The Google Sheet for this tip is available here as well as the n8n template.

Sunday, July 6, 2025

How to Handle Nested Loops in n8n with Sub-workflows

As you dive deeper into n8n, you'll quickly realize the power of nodes like Loop for processing lists of data. But what happens when you need to handle more complex scenarios, like iterating over one list of data for every single item in another? This is the realm of nested loops — a fundamental concept in programming that requires a clever workaround in n8n.

In this guide, we'll walk through a common problem with nested loops in n8n and reveal an elegant solution using sub-workflows.

The Initial Approach: A Tale of Two Loops

Let's imagine a scenario where you have two distinct lists of data you need to combine: a list of colors and a list of numbers. Your goal is to create a new item for every possible combination of a color and a number.

A natural first instinct is to build a simple nested loop structure. You might start by creating a list of colors, perhaps in a Code node:

A screenshot of an n8n Code node, configured to return an array of three colors: yellow, blue and green.

Then, you would use a Loop node to iterate through each color.

A screenshot of an n8n Loop node configured to iterate over the list of colors from the previous node.

After executing the workflow, we can see all three colors appear in the Done Branch, which is exactly what we would expect at this stage.

A screenshot of the n8n Loop node's Done Branch showing three output items, each representing one of the colors (yellow, blue, and green).

Inside that loop, you'd add another Code node with your numbers:

A screenshot of an n8n Code node, configured to return a list of three numbers: 1, 2, and 3.

Finally, you would add a nested Loop node to process the numbers, and a Set node at the very end to combine the data and see the results.

A screenshot of the n8n Set node's parameters, showing expressions to combine the color and number values.

Your workflow might look something like this:

A diagram of an n8n workflow with a nested loop structure. The outer loop iterates over colors, and an inner loop iterates over numbers.

Now let's execute the workflow and check the results. You will find something unexpected. The outer loop's Done Branch might show a single item, or a number of items that don't make sense, and the values are completely wrong.

A screenshot of the output of the nested loop workflow, showing an unexpected number of items (18) and incorrect data in the outer loop's Done Branch.

This happens because n8n's Loop node doesn't operate like a traditional programming loop in this nested context. The inner loop executes only once on the initial data, rather than being "reset" for each item of the outer loop. This is a common point of confusion for new n8n users.
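For comparison, here is what the nested loop is supposed to compute, written as plain JavaScript. The inner loop restarts from scratch for every item of the outer loop, which is exactly the behavior the nested Loop nodes fail to reproduce:

```javascript
// The intended behavior of a nested loop:
// every color paired with every number.
const colors = ['yellow', 'blue', 'green'];
const numbers = [1, 2, 3];

const combinations = [];
for (const color of colors) {       // outer loop (colors)
  for (const number of numbers) {   // inner loop, reset for each color
    combinations.push({ color, number });
  }
}

console.log(combinations.length); // 9 (3 colors x 3 numbers)
```
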

The Solution: Harnessing the Power of Sub-workflows

Fortunately, there is a simple and effective workaround that allows you to achieve the desired nested loop functionality: using an Execute Sub-workflow node.

A sub-workflow is a separate, self-contained workflow that can be triggered by another workflow. This is the key to our solution, because it allows you to create a new "execution context" for each item in the outer loop.

Step 1: Create the Inner Loop as a Sub-workflow

First, let's build the inner loop logic in its own separate workflow. This new workflow will handle the iteration over the numbers. It will need a starting point to receive the data from the main workflow, which we'll handle with an Execute Sub-workflow Trigger node.

  1. Create a new workflow.
  2. Add an Execute Sub-workflow Trigger node. Give it a descriptive parameter name, like color, which will hold the color from the outer loop.
  3. Add the Code node with your list of numbers, just as you did before.
  4. Add a Loop node to iterate over the numbers.
  5. Inside the Loop, add a Set node. Here, you'll set the final output, combining the color (from the input trigger) and the number (from the current loop item).

A screenshot of the Execute Sub-workflow Trigger node's parameters, showing the 'color' parameter being defined.

A screenshot of the Set node's parameters, showing the expressions being used to combine the color and number values.

The final sub-workflow should look like this:

A screenshot of the final sub-workflow, showing the Execute Sub-workflow Trigger, Code, Loop, and Set nodes connected in a sequence.

Step 2: Call the Sub-workflow from Your Main Workflow

Now, let's go back to your main workflow.

  1. Remove the nested Loop node you had before.
  2. In its place, add an Execute Sub-workflow node.
  3. Configure the Execute Sub-workflow node to call the sub-workflow you just created.
  4. Most importantly, you need to pass the current item from the main workflow's loop into the sub-workflow. To do this, link the output of your first Loop node to the Execute Sub-workflow node. You'll pass the color value into the color input parameter you defined in the sub-workflow.

A screenshot of the Execute Sub-workflow node's parameters, showing how the 'color' value from the main workflow is passed to the sub-workflow.

Your main workflow will now be much cleaner:

A screenshot of the main workflow, showing the starting nodes connected to the Execute Sub-workflow node.

Step 3: Run and Verify

Now, when you execute the main workflow, you will see the results you were hoping for! For each color in your first list, the sub-workflow will be called, and it will iterate through all the numbers, creating 9 unique items (3 colors x 3 numbers).

A screenshot showing the final output of the sub-workflow solution, with 9 unique items created from the combinations of colors and numbers.

The Takeaway

While n8n doesn't support nested loops in the traditional sense, using a sub-workflow is a powerful and reliable pattern. It allows you to create a modular, clean, and repeatable process for handling nested data structures. As you build more complex workflows, remember this trick—it's an essential tool for an n8n learner!

Sunday, June 29, 2025

Mastering Custom Retry and Delay Logic in n8n

One of the most common challenges when building a workflow is dealing with external APIs. Services often have rate limits, temporary outages, or other issues that can cause your workflow to fail. While n8n offers a built-in retry mechanism on most nodes, its options are limited. This guide will walk you through how to build a robust, custom retry and delay system that gives you complete control over your workflow's behavior.

The Limitations of n8n's Default Retry Settings

Most nodes in n8n have a "Settings" tab where you can configure basic retry behavior. You can specify a maximum number of tries and a fixed delay between them.

The default retry settings in an n8n node's "Settings" tab, showing options for "Maximum Tries" and "Delay Between Tries."

However, the maximum delay is capped at 5,000 milliseconds (5 seconds), and the maximum number of tries is limited to 5. This simply isn't enough for many real-world scenarios. For example, if an API has a strict 60-second throttling period, a 5-second delay won't help you at all. To handle these situations, you need to build your own logic.

Building a Custom Retry Loop: Step-by-Step

Our custom solution will use a few core n8n nodes to create a flexible loop that retries a failed node with a custom delay and for a custom number of times. Let's build it together.

Step 1: Initialize Your Retry Variables with a Set Node

First, we need to define the rules for our retries. We'll use a Set node to create and store these variables at the beginning of the workflow.

  • Node Name: Set Fields
  • Key: max_tries
  • Value: A number representing the total number of attempts you want to make (e.g., 6).
  • Key: delay_seconds
  • Value: A number representing the initial delay in seconds (e.g., 30 for a 30-second delay).

This initial Set node acts as the control panel for our retry logic, making it easy to change the settings later without modifying the rest of the workflow.

The Set Fields node configured with two keys: max_tries and delay_seconds with their respective initial values.

Step 2: Configure Your Target Node to Handle Errors

Next, add the node that might fail—in this example, an HTTP Request node. This could be any node that calls an external service. The critical part here is to tell the node what to do when it encounters an error.

An HTTP Request node, which is the target node for the custom retry logic.

  • In the Settings tab of your HTTP Request node, find the On Error dropdown.
  • Change the setting to Continue (using error output).

The "Settings" tab of an HTTP Request node with the "On Error" dropdown set to Continue (using error output).

This setting is essential. Instead of stopping the workflow, it will send the failed data to a new output branch, allowing us to process the error and decide whether to retry.

Step 3: Decrement the Retry Counter

When the target node fails, the data will flow to its "Error" output. We will use the max_tries variable as our counter and decrement it with each failed attempt. We also need to keep the delay_seconds variable in our data to be used later in the loop.

  • Connect another Set node to the Error output of your target node.
  • Node Name: Edit Fields
  • Key: delay_seconds
  • Value: {{$json.delay_seconds}}. This ensures the delay value is carried through the loop without being changed.
  • Key: max_tries
  • Value: Set this value using the expression: {{$json.max_tries - 1}}. This expression simply subtracts 1 from the current value of max_tries with each iteration, tracking our remaining attempts.

The Edit Fields node showing the expression {{$json.max_tries - 1}} for the max_tries key and the expression {{$json.delay_seconds}} for the delay_seconds key.

Step 4: Check if Tries are Remaining with an If Node

Now that we've decremented the counter, we need to check if we should continue trying.

  • Connect an If node to the Edit Fields node.
  • Condition: left value should be {{$json.max_tries}}, Operation should be Is Less Than or Equal, and right value should be 0.

The If node configured to check the condition where the max_tries value is less than or equal to 0.

This If node will have two outputs:

  • True: This path is followed if max_tries is 0 or less. This means we are out of tries, and the workflow should handle the final failure.
  • False: This path is followed if max_tries is greater than 0. This means we still have attempts left.

Step 5: The Delay and Loop Back

This is where the magic happens! We'll use a Wait node to pause the workflow before trying again.

  • Connect a Wait node to the False output of the If node (the path where we still have attempts remaining).
  • Wait Amount: Set this to {{$json.delay_seconds}}. This is the dynamic value we defined in our first Set node.
  • Wait Unit: Seconds

Finally, connect the output of this Wait node back to the input of your target HTTP Request node. This creates a loop. When a failure occurs and we have retries left, the workflow waits for the specified delay and then re-executes the target node.

A screenshot of the complete n8n workflow, including the loop connecting the Wait node back to the target HTTP Request node.

The Wait node with "Wait Amount" set to {{$json.delay_seconds}} and "Wait Unit" set to "Seconds."

Advanced Logic: Exponential Backoff

The basic loop uses a fixed delay, but you can make it even smarter. A powerful technique called exponential backoff can prevent you from overloading a server by increasing the delay with each failed attempt.

To implement this, simply modify your Edit Fields node in Step 3. The delay_seconds key's value would be updated like this:

  • Key: delay_seconds
  • Value: {{$json.delay_seconds * 2}}

Now, each time the loop runs, the delay will double. This makes your workflow more polite to the external API and gives the service more time to recover.
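The resulting delay sequence is easy to verify in plain JavaScript. This sketch just mirrors the doubling performed by the {{$json.delay_seconds * 2}} expression; the function name is my own:

```javascript
// Exponential backoff: the delay doubles after every failed attempt.
function backoffDelays(initialSeconds, maxTries) {
  const delays = [];
  let delay = initialSeconds;
  // One delay per retry; the final attempt has no wait after it.
  for (let attempt = 1; attempt < maxTries; attempt++) {
    delays.push(delay); // wait this long before the next retry
    delay *= 2;         // same as {{$json.delay_seconds * 2}}
  }
  return delays;
}

console.log(backoffDelays(30, 6)); // [ 30, 60, 120, 240, 480 ]
```

With the example values from Step 1 (6 tries, 30 seconds), the workflow waits 30, 60, 120, 240, and 480 seconds between attempts.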

What Happens on Success and Final Failure?

  • Success: When the target node finally succeeds, it will no longer send data to the "Error" output. Instead, the data will continue down the "Success" output branch, and the retry loop will be bypassed entirely.
  • Final Failure: If all tries are exhausted, the If node will send the data down its True output (where max_tries is not greater than 0). You can connect a Stop And Error node here to halt the workflow or use a notification node to send an email or a Slack message, alerting you that the task ultimately failed.

By building this custom loop, you gain the flexibility to handle a wide range of API and service issues, making your workflows more resilient and reliable.

Let's execute the workflow to check this approach.

A screenshot of the n8n workflow's execution history, displaying that the target node was run multiple times as defined by the custom retry logic.

You'll see that the target node is called exactly the number of times specified in the initial Set node, confirming the reliability and control of this custom retry mechanism.

An n8n template for this approach can be found here.

Saturday, June 21, 2025

Elevate Your n8n Workflows: Google Sheets as Your Intuitive UI 🚀

As you dive deeper into automating with n8n, you'll often find yourself needing more than just simple data processing. Imagine a scenario where you want to:

  • Store input data in a structured, tabular format that's easy to review and manage.
  • Execute your n8n workflow only on specific, relevant parts of your dataset.
  • Capture new fields generated by your n8n workflow's output, neatly organized.

In essence, you're looking for a user interface (UI) for your n8n workflows. While dedicated UIs can be complex to build, Google Sheets offers a remarkably popular, accessible, and straightforward solution. Its seamless integration with n8n makes it an ideal candidate for managing your workflow's inputs and outputs.

Let's walk through a practical example to demonstrate how you can leverage Google Sheets as an effective UI for your n8n automations.

Setting Up Your Google Sheet: The Foundation of Your UI

First things first, we need a Google Sheet to serve as our interface. We'll create a simple sheet with three key fields: Color, Status, and Number.

  • Color: This column will hold our primary input data – in this case, just a color name (e.g., "Red", "Blue", "Green").
  • Status: This is a crucial control field. It will indicate whether a row is READY for processing by our n8n workflow or if it has already been DONE. This allows us to selectively process data.
  • Number: This column will be our output field. We'll use n8n to calculate the string length of the Color name and populate this column.

A screenshot of a Google Sheet with columns for Color, Status, and Number, and sample data populated.

By using these fields, Color acts as our input, Number as our output, and Status provides the necessary control for sequential processing and clear visibility into what's been completed.

Building Your n8n Workflow: Step-by-Step Automation

Now, let's switch over to n8n and configure our workflow to interact with this Google Sheet.

Step 1: Reading Data with the Google Sheets Node

Our first step in n8n is to read the data from our Google Sheet. We'll use the Google Sheets node for this. Configure it to read all rows where the Status column is set to READY. This ensures that our workflow only processes new or unhandled items.

A screenshot of the n8n workflow with the Google Sheets node configured to read rows, filtered by Status equals READY.

Once you execute this node, you'll see your tabular data from the Google Sheet, specifically filtered to show only the rows marked READY. This immediate feedback confirms your connection and filter are working correctly.

The output of the Google Sheets node in n8n, showing only the data rows where Status is READY.

Step 2: Processing Data with Loop and Set Nodes

Next, we'll add a Loop node followed by a Set node within that loop. The Loop node is essential because it allows us to process each READY row individually.

A screenshot of the n8n workflow showing a Loop node and a Set node nested inside it.

Inside the Set node, we'll perform three important actions:

  1. Pass row_number to output: It's vital to retain the original row_number from the Google Sheet. This number will be used later to update the correct row.

  2. Calculate and return the length of the Color name: This will be the value we write back into the Number column in our Google Sheet.

  3. Update Status to DONE: After processing a row, we'll change its Status to DONE. This prevents the same row from being processed again in subsequent workflow executions.

A screenshot of the n8n Set node's configuration, showing the expression to get the row_number and the expression to calculate the length of the Color string, along with the Status value being set to DONE.
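The transformation the Set node performs boils down to a few lines. Here is a plain-JavaScript sketch of the same logic; the function name is my own, and the field names mirror the sheet's columns:

```javascript
// What the Set node computes for each READY row: keep the row_number,
// derive Number from the color name's length, and mark the row DONE.
function processRow(row) {
  return {
    row_number: row.row_number, // needed later to update the right row
    Color: row.Color,
    Number: row.Color.length,   // string length of the color name
    Status: 'DONE',             // prevents reprocessing on the next run
  };
}

console.log(processRow({ row_number: 2, Color: 'Red', Status: 'READY' }));
// { row_number: 2, Color: 'Red', Number: 3, Status: 'DONE' }
```
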

Step 3: Updating Rows with the Google Sheets Node (Again!)

Finally, we'll add another Google Sheets node, but this time, its purpose is to update the rows we've just processed.

A screenshot of the n8n workflow with a second Google Sheets node added, configured to perform an 'Update' operation.

n8n is quite smart here! It will automatically map the columns from your workflow's output to the corresponding columns in your Google Sheet. Crucially, it will use the row_number we passed through the Set node to match and update the correct row in your sheet.

A screenshot of the second Google Sheets node's update configuration, highlighting how it uses row_number to match and update the correct row.

Executing and Verifying Your Workflow

With all nodes configured, it's time to execute our n8n workflow and observe the magic!

The n8n output should look exactly as expected, showing the Number (length of the color name) and the Status updated to DONE for each processed row.

A screenshot of the final execution output in n8n, showing the processed data with the Number field populated and the Status updated to DONE.

Now, let's hop back to our Google Sheet itself to confirm the changes.

A screenshot of the Google Sheet, showing that the Number and Status columns have been updated by the n8n workflow.

Fantastic! The Google Sheet has been updated as well. The Number column now contains the calculated lengths, and the Status for those rows is DONE.

Controlling Your Workflow's Input

Here's the real power of this setup: If you execute your n8n workflow again right now, nothing will happen. Why? Because there are no more rows with the Status set to READY! This simple yet effective mechanism allows you to fully control which input values your n8n workflow processes. You can add new READY rows whenever you need to trigger the workflow for new data.

This approach not only provides a clear way to feed input values to your n8n workflow but also gives you immediate visibility into the output values and the processing status of each item.

You can access the Google Sheet used in this example here and find the n8n template here to get started quickly!