4. Node-RED¶
4.1. What is Node-RED?¶
4.1.1. What is Node-RED?¶
Node-RED is a programming tool used for linking all our tools together (OpenRPA and OpenFlow), as well as hardware devices, APIs and even online services, in new and interesting ways. Think of it as a "backend" process-flow designer and integrator. Communication between Node-RED and the OON stack is done through the MQTT protocol (powered by RabbitMQ).
It provides an in-browser editor where you can build flows using any of the available nodes. Each node represents a step; wired together, the nodes form a meaningful task. Flows follow a common pattern: input, processing and output. It is important to note that Node-RED functions as middleware for an information-processing system: it simply connects inputs to workflows and lets them process the data.
4.1.1.1. What is MQTT?¶
MQTT stands for Message Queuing Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices with low-bandwidth, high-latency or unreliable networks. The design principle is to minimise network bandwidth and device resource requirements while attempting to ensure reliability and assurance of delivery. These principles in turn make the protocol ideal for the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are limited.
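For illustration only (MQTT is handled internally by the OON stack, so this is not part of the OpenIAP setup), a minimal publish/subscribe round trip with the mqtt package for Node.js could look like the sketch below; the broker URL and topic name are assumptions.
// Hypothetical example using the "mqtt" npm package.
// The broker URL and topic are assumptions, not OpenIAP defaults.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
    // Subscribe first, then publish a test message to the same topic
    client.subscribe('demo/greeting', () => {
        client.publish('demo/greeting', 'hello from MQTT');
    });
});

client.on('message', (topic, message) => {
    console.log(topic + ': ' + message.toString()); // demo/greeting: hello from MQTT
    client.end();
});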
4.2. Accessing Node-RED for the first time¶
To access Node-RED, simply use a browser and go to your Node-RED environment URL. If running locally, the default is http://localhost.openiap.io:1880. If you don't have an OpenFlow/Node-RED installation of your own, feel free to create a free (and temporary) Node-RED demo using the OpenIAP platform.
4.2.1. Quickstart Running Node-RED Demo Instance¶
Here the users will learn how to start a Node-RED Demo Instance.
4.2.1.1. Creating a demo Node-RED instance using OpenIAP platform¶
Go to the OpenIAP demo environment, which can be found at Demo OpenIAP (https://app.openiap.io).
Access it using your OpenFlow credentials. If this is your very first login, any credentials will do, as your user will then be created with the credentials you provide. Please take note of your username and password!
Note
Your free NodeRED demo environment URL is created based on your OpenFlow username, so please do not create a username containing characters that are invalid in a URL (such as _, #, @, $, etc.), otherwise it will not work.
Once there, look for the NodeRED tab in the upper menu and click it. Then proceed by clicking the Create NodeRED button.

If everything goes right, output similar to the next image will be shown. Your demo environment is ready to use and is accessible at https://{YOUR-OPENFLOW-USERNAME}.app.openiap.io/. In the following image, the username was doc-johnny.

4.2.1.2. Logging in to Node-RED¶
Access the desired Node-RED URL. Once there, a button is shown with the text Sign in with SAML. When you click the button, Node-RED gathers your current (or cached) OpenFlow authentication data and logs you in to Node-RED as the same user.

Node-RED Sign In page.¶
4.2.2. Quickstart Running OpenFlow Node-RED using NPM¶
Please use the guide provided at Install guides. Using Docker is the preferred and recommended way; see Install using docker.
4.3. Node-RED Editor¶
The editor window is where all the work gets done. It contains four components: header, palette, workspace and sidebar.

Node-RED Components.¶
4.3.1. Header¶
The header contains the deploy button, main menu, and, if user authentication is activated, the user menu.
4.3.1.1. Deploy Button¶
The deploy button is used to deploy flows once you have finished creating or editing them. It is important to remark that you must always deploy a flow after editing it so the changes are applied.
4.3.2. Palette¶
The palette contains all of the nodes that are installed and available to use. These nodes are organized into a number of categories, which can be expanded or collapsed by clicking their headers.
The entire palette can be hidden by clicking the toggle button that is shown when the mouse is over it or by pressing Ctrl+p.¹
¹ - Palette (https://nodered.org/docs/user-guide/editor/palette/)

Node-RED Editor Palette.¶
4.3.3. Workspace¶
The main workspace is where flows are developed by dragging nodes from the palette and wiring them together.
The workspace has a row of tabs along the top; one for each flow and any subflows that have been opened. ²

Node-RED Editor Workspace.¶
4.3.3.1. View Tools¶
The footer of the workspace contains buttons to zoom in and out as well as to reset the default zoom level. It also contains a toggle button for the view navigator.
To zoom in, either click the + button inside the view navigator or press Ctrl+=.
To zoom out, either click the - button inside the view navigator or press Ctrl+-.
To reset the zoom, either click the O button inside the view navigator or press Ctrl+0.
The view navigator provides a scaled down view of the entire workspace, highlighting the area currently visible. That area can be dragged around the navigator to quickly jump to other parts of the workspace. It is also useful for finding nodes that have been lost to the further edges of the workspace.

Node-RED Editor Workspace Navigator Active.¶
4.3.3.2. Customising the view¶
The workspace view can be customised via the View tab of the User Settings dialog.
To activate the User Settings dialog, press Ctrl+,.

Node-RED Editor User Settings Dialog.¶
² - Workspace (https://nodered.org/docs/user-guide/editor/workspace/)
4.3.4. Sidebar¶
The sidebar contains panels that provide a number of useful tools within the editor.³
- Information: view information about nodes and their help info
- Debug: view messages passed into debug nodes
- Configuration Nodes: manage configuration nodes
- Context Data: view the contents of the context variables

Node-RED Editor Sidebar.¶
Some nodes contribute their own sidebar panels, such as node-red-dashboard (https://flows.nodered.org/node/node-red-dashboard).
The panels are opened by clicking their icon in the header of the sidebar, or by selecting them in the drop-down list shown.
The sidebar can be resized by dragging its edge across the workspace.
If the edge is dragged close to the right-hand edge, the sidebar will be hidden. It can be shown again by selecting the Show sidebar option in the View menu, or by using the Ctrl+Space shortcut.
³ - Sidebar (https://nodered.org/docs/user-guide/editor/sidebar/)
4.4. Flow, Subflows, Nodes and Messages¶
This section is mostly based on Node-RED's documentation. Please refer to https://nodered.org/docs/user-guide/editor for further details.
A Flow is a working space where Nodes are organized. Each Flow is represented by a tab with its name; a description is provided in the Information sidebar. All the Nodes within a Flow share the same Flowscope Context, that is, they all have access to the same context values.
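As a minimal sketch of what that means in practice, a function node can read and write flow-scoped context values that every other node in the same Flow can see; the key name "counter" below is arbitrary.
// Function node: count how many messages have passed through this flow.
// "counter" is an arbitrary key; any node in this Flow can read it back.
let count = flow.get("counter") || 0;
count += 1;
flow.set("counter", count);
msg.payload = count;
return msg;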
New Flows are created by clicking the "plus" button in the top bar. To edit a Flow's properties, double-click its tab in the top bar. Flows can be disabled by clicking the toggle button at the bottom of the dialog box.
Subflows are collections of Nodes grouped within a single Node. The purpose is to reduce the visual complexity of the Flow or to reuse these collections. A created Subflow becomes a new Node in the palette.
To create a blank Subflow, click the 'Subflow -> Create subflow' option in the menu. Another option is to create a new Subflow out of existing Nodes: select the Nodes to be converted and click the 'Subflow -> Selection to Subflow' option in the menu.
Nodes are the visual representation of actions and can be chained together via wires, thus creating a flow. The wires connect to a node's ports, which work as doors. Each node can have at most one input port and several output ports.
Some icons must be considered here. A blue circle above a node means that it has undeployed changes. If there are any errors within a node, a red triangle will appear above it. When a node has an icon and a status message below it, they show the runtime status of the node.
By double-clicking a node, the user gains access to its Properties, Description and Appearance, each in its own tab of the dialog box. Users may also disable a node by clicking the button at the bottom of the dialog box.
Messages are JavaScript objects and can have any set of properties. Messages are the units that pass between Nodes while a Flow is working.
The most used property of a Message is its payload, represented by msg.payload inside the workspace. This is the default property for passing data between Nodes, but users are free to create whatever properties best fit their needs. All JavaScript value types are accepted.
When working with JSON strings, the string must first be converted into a JavaScript object before its content can be accessed.
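For instance, inside a function node a JSON string arriving in msg.payload could be parsed as sketched below; the "name" property is hypothetical.
// msg.payload arrives as a JSON string, e.g. '{"name":"Jane"}'
let data;
try {
    data = JSON.parse(msg.payload);   // string -> JavaScript object
} catch (err) {
    node.error("payload is not valid JSON", msg);
    return null;                      // returning null stops the flow
}
msg.payload = data.name;              // "name" is a hypothetical property
return msg;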
For further details on Messages, please refer to Node-RED's page on the topic: https://nodered.org/docs/user-guide/messages.
4.5. OpenFlow and OpenRPA Nodes¶
The following nodes are dedicated to the integration of Node-RED with OpenFlow and OpenRPA.
4.5.1. RPA Detector¶
This node is responsible for invoking a Detector previously created in OpenRPA. Once deployed, the RPA Detector will remain active even if the Node-RED editor is closed (since the server is still running). Connect the output of this node to an RPA Workflow node to invoke a workflow whenever the detector is triggered.
Properties:
Detector - The OpenRPA Detector that will be deployed in this flow. A list of all available Detectors will be presented to the user.
Name - Display name of the node.
4.5.2. RPA Workflow¶
This node is responsible for invoking an OpenRPA workflow remotely.
There are three output ports on this node. The first, named completed, outputs the message from the OpenRPA robot if its execution succeeded. The second, named status, outputs the status of the robot while it is executing. Finally, the third, named failed, outputs the error message returned by the robot if its execution failed.
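As a sketch, a function node wired to the status output could surface the robot's progress in the debug sidebar; the exact shape of the status message is an assumption here, so inspect a real message with a debug node before relying on specific properties.
// Function node wired to the "status" output of an RPA Workflow node.
// The payload shape is an assumption; verify it with a debug node first.
node.warn("robot status update: " + JSON.stringify(msg.payload));
return msg;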
Properties:
Robot - The robot to which the Workflow belongs. It corresponds to OpenRPA's Projects.
Workflow - The name of the Workflow to be invoked. A list of all available workflows will be presented.
Local queue name - Name of the queue that will be created inside RabbitMQ for "robot agents" to consume from.
Name - Display name of the node.
4.5.3. SMTPServer In¶
Coming soon - Work in progress
4.5.4. Workflow In¶
This node creates a new Workflow in OpenFlow, visible in the Workflows page (http://demo.openiap.io/#/Workflows) or in the "Workflows" page of your own OpenFlow instance (usually /#/Workflows).
The Workflow created can have an execution chain that starts with this node. By wiring an RPA Workflow node to this one, it is possible to execute RPA Workflows.
The workflow can also be invoked by clicking the "Invoke" button inside the "Workflows" page, or by creating an instance of it using the Assign node in Node-RED or the Assign OpenFlow activity inside OpenRPA.
It is important to note that a Workflow Out node must always be added to the end of the execution flow started by a Workflow In node.
By deploying a flow containing this node, a role will be created named after the Queue name appended with "users". If the user desires anyone else to access it, that person must be added on the Roles page.
The user can also create a form to be used with this execution flow by using the Workflow Out node to define it. If you do not know what a Form is, please refer to the forms section.
Properties:
Queue name - Name of the workflow when accessed via OpenFlow.
RPA - Whether the workflow can be invoked by an OpenRPA robot.
WEB - Whether the workflow can be invoked via the web (that is, via the OpenFlow server).
Name - Display name of the node.
4.5.5. Workflow Out¶
This node represents the output of a Workflow created with the Workflow In node. It also allows the user to define an OpenFlow Form (more on that in the forms section), which lets the user insert input data and can be chained to other Workflows.
By deploying a flow containing this node, a role will be created named after the Queue name appended with "users". If the user desires anyone else to access it, that person must be added on the Roles page.
The user can also create a form to be used with this execution flow by using the Workflow Out node to define it. If you do not know what a Form is, please refer to the forms section.
Properties:
State - There are three options: idle, completed and failed.
Userform - Defines a form for gathering user input data.
Name - Display name of the node.
.
4.6. Flow Examples¶
Coming soon - Work in progress
4.6.1. Using OpenFlow Forms¶
4.6.1.1. Create a Form in OpenFlow¶
In this section the users will learn how to create a Form in OpenFlow. If they do not know what a Form is, please refer to the forms section.
The first step is to set up a form which will be used to pass "Hello from OpenFlow!" (or any other input inserted by the user) to Node-RED.
Go to the Forms page (default is http://demo.openiap.io/#/Forms) and click the Add Form button.

Now drag a Text Field form to the Form designer.

Change the Label parameter to Please enter 'Hello from OpenFlow!' below.

Click on the API tab and change the Property Name parameter to hello_from_openflow. This is the variable which will be passed to Node-RED.

Finally, click the Save button.

Set the Form name as hellofromopenflow and click the Save button to save it.

That's it! You have successfully configured a Form in OpenFlow. Proceed to the next sections to see how to configure it in Node-RED.
4.6.1.2. Configure Form in Node-RED¶
Now the users will learn how to properly configure the Form in Node-RED.
Navigate to the Node-RED instance, create a new flow and double-click its tab to rename it to Forms. Then click the Done button to save it.
Note
Proceed to Accessing Node-RED for the first time for more information on how to set up your own Node-RED instance.

Drag a workflow in node to the workspace. This node will be responsible for starting the execution flow which will run the Form processing logic.

Double-click the workflow in node to open its Properties tab. Set the Queue name to openflowformsexample and check both the RPA and WEB checkboxes. RPA is checked to allow the Form created in the previous section to be invoked from OpenRPA, and WEB to allow it to be invoked from OpenFlow. Also, change its name to OpenFlow Forms Workflow.
Note
The user can also press RETURN to edit the node's properties as long as the node is focused inside the workspace.

Now we're going to configure the logic for processing the variable returned from the Form. Drag a switch node to the workspace and wire it to the OpenFlow Forms Workflow node previously set. Double-click the switch node to open its Properties tab. Set its Property parameter to msg.payload.hello_from_openflow.

Then we can configure the different ports for the different input cases inside the pattern-matching box, just below the Property parameter.
First, change the first case (==) to is empty.
Then add a new case by clicking the + add button just below the pattern-matching box and set it to is null.
Add a new case and set it to otherwise.
Finally, click the Done button.
This is done so that when the end-user enters an empty or null input, this execution flow enters an idle state and the Form remains available on OpenFlow's home page. Otherwise, if the user enters any input, it is passed into Node-RED. A function-node equivalent of this routing is sketched below.
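For reference, the routing performed by the switch node is equivalent to this function-node sketch (illustration only; the example uses the switch node itself, and a function node used this way would need three outputs configured in its Setup tab).
// Output 1 = is empty, output 2 = is null, output 3 = otherwise.
const value = msg.payload.hello_from_openflow;
if (value === "") {
    return [msg, null, null];   // empty input -> idle Workflow Out
}
if (value === null || value === undefined) {
    return [null, msg, null];   // null input -> idle Workflow Out
}
return [null, null, msg];       // any other input -> completed Workflow Out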

Now drag a workflow out node into the workspace and wire it to the first two ports of the switch node. These ports correspond, respectively, to the is empty and is null cases set in the previous step. That is, the execution flow will end up here if the user enters either an empty or a null input.

Double-click the workflow out node to open its Properties tab. Change its State to idle. Also change the Userform to hellofromopenflow, which is the form we defined in the previous section. Then click the Done button to save the changes.

Drag another workflow out node into the workspace and wire it to the third (and last) port of the switch node. This port corresponds to the otherwise case set up before.

Double-click the workflow out node to open its Properties tab. Change its State to completed. Also change the Userform to hellofromopenflow. Then click the Done button to save the changes.

Now drag a debug node to the workspace and wire it to the second workflow out node. This node will be used so users can see the message which will be passed into the Form.

Finally, click the Deploy button to finish the Form configuration and update the current Flow. The Flow should now look like the image below.

4.6.1.3. Invoking the Form¶
In this section the users will learn how to invoke the Form just created by using Node-RED.
First, drag an inject node to the workspace.

Now, drag an assign node to the workspace and wire it to the previously set inject node.

Double-click the assign node to open its Properties tab. Assign its Workflow to the openflowformsexample workflow set up in the previous section. It is useful to remember that the Workflow set here corresponds to the Queue name of the Workflow defined in the workflow in node.
Now assign the Target which will execute the Workflow to either a Role or a particular user. In our case, select users, so any user inside the users role will be able to invoke it from OpenFlow.

Now, click the Deploy button once again to update the Flow.

Then click the button inside the inject node to assign an instance of the Workflow previously created to the users role.
Open a new tab and navigate to OpenFlow’s home page. The instance of the Workflow we just assigned appears.

Users can now click the Open button to test the Form we have just created. Enter Hello from OpenFlow! in the text field and then click the Submit button. A debug message will appear in Node-RED.

4.6.2. Dummy Integration OpenRPA-OpenFlow-NodeRED¶
In this example, users will learn how to use OpenRPA, OpenFlow and Node-RED for message passing.
4.6.3. AI Image Recognition¶
In this example, users will learn how to create a page in Node-RED containing a Dropzone (Copyright (c) 2012 Matias Meno, https://www.dropzonejs.com/) which connects to the Google Cloud Vision API (https://cloud.google.com/vision) to identify image contents. The final result of this example is shown below. Note that users must first have properly set up an API key, as seen here (https://cloud.google.com/vision/docs/setup).
Navigate to the Node-RED instance, create a new Flow and double-click its tab to rename it to AI Image Recognition. Then click the Done button to save it.

Now that the Flow is created, users can proceed to creating an HTTP endpoint which will serve the page. Remember to click the Deploy button to save the changes.
4.6.3.1. Serving the page through Templates and HTTP Endpoint¶
In this section, users will learn how to create a page which will serve as the entrypoint for end-users. End-users will be able to drop images, or select them from a Windows File Dialog, on the page which will be created.
Drag an http in node to the workspace.

Double-click the http in node to open its Properties tab.
The URL can be set to any sub-location you'd like; for simplicity, ours will be set to /google-vision-complex. This means any end-user with proper permissions will be able to access the page at the URL of your Node-RED instance plus the sub-location cited above, e.g. paulo.app.openiap.io/google-vision-complex. Refer to Accessing Node-RED for the first time to find out how to set up a Node-RED instance.
Save this sub-location somewhere, because you will set up a websocket pointing to it in the next section!
After setting its URL, click the Done button to finish the node's configuration.

Note
The user can also press RETURN to edit the node's properties as long as the node is focused inside the workspace.
Note
Proceed to Accessing Node-RED for the first time for more information on how to set up your own Node-RED instance.
Drag a template node to the workspace and wire it to the http in node.

Double-click the template node to open its Properties tab.
First, rename the node to dropzone.js. Then change its Property to msg.dropzonejs and its Syntax Highlighting to JavaScript.
Finally, paste the raw code contained in DropzoneJS for AI Image Recognition (https://gist.github.com/syrilae/945838275bf729fb568d91dd63147706) into the Template box. This code is responsible for the logic of processing and uploading the image to the API when it is dropped inside the Dropbox.

Now drag another template node into the workspace and wire it to the dropzone.js node that was just created. This one is responsible for styling the Dropbox with custom CSS.

Double-click the newly added template node to open its Properties tab.
Rename the node to css. Change its Property to msg.css and its Syntax Highlighting to CSS.
Now paste the raw code contained in DropzoneJS CSS for AI Image Recognition (https://gist.github.com/syrilae/dde9fcbbdcfe6a4ff4750a2359963d7f) into the Template box. This code is responsible for the styling of the Dropbox page.

Drag another template node into the workspace and wire it to the previously set css node. This node will be responsible for the HTML code behind the page, basically structuring everything that was added so far.

Double-click this template node to open its Properties tab.
Rename the node to html.
Paste the code below inside the Template box.
<script>
  {{{dropzonejs}}}
</script>
<style>
  {{{css}}}
</style>
<script>
  // "myAwesomeDropzone" is the camelized version of the HTML element's ID
  Dropzone.options.myDropzone = {
    paramName: "myFile", // The name that will be used to transfer the file
    maxFilesize: 2, // MB
    accept: function(file, done) {
      if (file.name == "justinbieber.jpg") {
        done("Naha, you don't.");
      }
      else { done(); }
    }
  };
</script>
<h1>Google Vision API - Upload a file here:</h1>
<body onload="wsConnect();" onunload="ws.disconnect();">
  <form action="/uploadpretty" class="dropzone" method="post" enctype="multipart/form-data" id="my-dropzone">
    <div class="fallback">
      <input name="myFile" type="file" />
      <input type="submit" value="Submit">
    </div>
  </form>
  <font face="Arial">
    <pre id="messages"></pre>
    <hr/>
  </font>
  <img src='https://bpatechnologies.com/images/logo_footer.png'>
</body>

The first Mustache (https://mustache.github.io/mustache.5.html), namely {{{dropzonejs}}}, is responsible for adding the execution logic of the dropzone.js node to the page which will be served by the endpoint created by the http in node.
The second Mustache is responsible for adding the CSS styling to this same page. A pun (or easter egg) is also included: the user will not be allowed to upload any file named justinbieber.jpg - you can remove this part of the code if you want to.
The remaining code shows an <h1> label containing Google Vision API - Upload a file here and sets up the logic for connecting to the websocket, plus a fallback form for browsers without it.
Drag an http response node into the workspace and wire it to the previously created template node.

Finally, as a last step, users will learn how to comment their Flow in Node-RED. Drag a comment node to the workspace and place it right above the http in node.

Double-click the comment node to open its Properties tab. Rename the node to Dropbox.

Finally, click the Deploy button to commit the changes.
That’s it! You have finished the first part of this Flow example. Your Flow should look like the image below.

Now that the users have finished configuring the Dropbox page, they can proceed to implementing the processing logic responsible for uploading the file to the Google Cloud Vision API (https://cloud.google.com/vision). Just follow the steps below.
4.6.3.2. Google Cloud Vision Processing logic¶
In this section, users will learn how to implement the logic for uploading the images acquired from end-users to the Google Cloud Vision API (https://cloud.google.com/vision) and receiving back the data.
First, drag an http in node to the Flow.

Double-click the http in node to open its Properties tab.
Set its Method property to POST. Now set its URL to /uploadpretty, since our form, defined in the html template in the previous section, sets its action property to this sub-location.
Also check the Accept file uploads? checkbox, otherwise we won't be able to upload any files to the websocket.

Drag a function node to the workspace and wire it to the [post] /uploadpretty node. This node will be responsible for converting the uploaded image to Base64 (https://developer.mozilla.org/en-US/docs/Glossary/Base64) so that we can upload it to the Google Cloud Vision API (https://cloud.google.com/vision).

Double-click the function node to open its Properties tab.
Rename it to toBase64.
Paste the following code into the Function box.
// Take the first uploaded file and encode its buffer as a base64 string
msg.payload = msg.req.files[0].buffer.toString('base64');
return msg;
The code gathers the first uploaded image and encodes its buffer into a base64 string.

Now the execution flow will be split into three different branches.
The first branch will be responsible for returning a status message to the end-user's client, indicating whether the submit action succeeded or not.
The second branch will format the payload, upload the image to the Google Cloud Vision API (https://cloud.google.com/vision) and update the page created in the previous section; the latter part will be done by creating a websocket out node.
The third branch will listen for any requests passed to the endpoint and turn them into debug messages.
Note that all these branches execute when the end-user uploads an image to the website.
First, drag an http response node to the workspace and wire it to the toBase64 node.

Now, drag a function node to the workspace and wire it to the toBase64 node as well.

Rename it to format payload.
Paste the following code into the Function box.
// Keep a copy of the base64 image and build the Vision API request body
msg.image64 = msg.payload;
msg.payload = {
    requests: [
        {
            image: {
                content: msg.payload
            },
            features: [
                {
                    maxResults: 5,
                    type: "LABEL_DETECTION"
                }
            ]
        }
    ]
};
return msg;
This code passes the image into the msg.payload variable and limits the number of LABEL_DETECTION features (https://cloud.google.com/vision/docs/labels) detected to 5 results.
Drag an http request node to the workspace and wire it to the format payload node. This node will be responsible for connecting and sending the payload to the Google Cloud Vision API (https://cloud.google.com/vision) and then returning the JSON object containing the data acquired from the API.

Double-click the http request node to open its Properties tab.
Set its Method to POST.
Set its URL to https://vision.googleapis.com/v1/images:annotate?key={KEY}, where {KEY} corresponds to your Google Cloud Vision API key (https://cloud.google.com/vision/docs/setup).
Change Return to a parsed JSON object.
Rename it to Google API.

Drag another function node to the workspace and wire it to the Google API node. This node will be responsible for collecting the data returned from the Google Cloud Vision API (https://cloud.google.com/vision) into an array and converting it to a serialized JSON string so it can be passed to the websocket out node. This, in turn, makes the page update itself automatically when the user uploads an image.

Double-click the function node to open its Properties tab.
Rename the node to Trim Response.
Paste the following code into the Function box.
// Collect the description and score of each label returned by the API
var retArray = [];
for (var i in msg.payload.responses[0].labelAnnotations) {
    let desc = msg.payload.responses[0].labelAnnotations[i].description;
    let score = msg.payload.responses[0].labelAnnotations[i].score;
    let thisObj = {
        desc: desc,
        score: score
    };
    retArray.push(thisObj);
}
// Serialize the result so it can be sent through the websocket
msg.payload = {
    result: retArray,
    resultJSON: JSON.stringify(retArray, null, '\t')
};
msg.payload = msg.payload.resultJSON;
return msg;
Drag a websocket out node to the workspace and wire it to the Trim Response node. This node will be responsible for the websocket listener through which the results are pushed back to the page.
Double-click the websocket out node to open its Properties tab.
Click the Edit button, right beside the Add new websocket-listener... field.

Set the websocket path to /ws/google-vision-complex.
Set the Flow which will be able to use this websocket to the AI Image Recognition Flow only.
Click on the Add button to save changes. Then click on the Done button to finish this node’s configuration.
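For context, the client side of this listener lives in the dropzone.js gist pasted earlier; a stripped-down sketch of that connection logic (simplified, not the exact gist code) looks roughly like this.
// Simplified sketch of the page's websocket logic (the real code is in the gist).
var ws;
function wsConnect() {
    // Use wss:// when the page itself is served over https
    var scheme = location.protocol === "https:" ? "wss://" : "ws://";
    ws = new WebSocket(scheme + location.host + "/ws/google-vision-complex");
    ws.onmessage = function (event) {
        // Trim Response sends a serialized JSON string of labels and scores
        document.getElementById("messages").textContent = event.data;
    };
}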

Finally, add a debug node to the workspace and wire it to the toBase64 node.

Double-click the debug node to open its Properties tab.
Set its output to complete msg object.

Finally, click the Deploy button to commit the changes. Your workflow should look like the image below.

That's it! This is the end of this Flow example. You can now navigate to the sub-location defined for your own Node-RED instance to test it! In my case, it is https://paulo.app.openiap.io/google-vision-complex. Try it out!
4.6.4. Email Receive, Send¶
There are two available nodes for working with e-mails, both named "email". The difference between them is that the first has only an output and the second only an input. For clarity, in this example the first node will be called the "email watcher" and the second the "email sender".
In this example, a simple system will be built for receiving and redirecting e-mails according to their content.
4.6.4.1. Receive e-mails¶
The first step is to set up an "email watcher", that is, the node that will repeatedly search for new e-mails on an IMAP server and forward them as messages. This is why this node only has an output. The configuration of this node is quite intuitive and self-explanatory, but it is important to highlight some of its features.
The user will have to provide the e-mail ID and password so that the e-mail service can be accessed. It is also important to remark that the “Disposition” parameter, if set to “None”, will cause the node to constantly send messages about the same e-mails, since it searches for unread e-mails. So, it is recommended to set this parameter to “Mark Read”. For the “Port” parameter, users can use the one that is suggested (993).

It is important to remark that some e-mail services will not give this type of application access to the e-mail account, so it is necessary to enable less secure app access. In this example, a Gmail account was used. To grant permission in Gmail, click on "Manage your Google Account" by clicking the avatar at the top of the screen. A new tab will open; click on "Security". One of the options will be "Enable less secure apps". After this type of app has been enabled, the email watcher node will be able to access the selected Inbox.
4.6.4.2. Redirecting e-mails¶
A switch node is employed to filter the messages, and multiple properties of the messages can be used as parameters for the filtering. The most common property is msg.payload, which corresponds to the body of the e-mail; it is possible to check whether or not it contains certain keywords.
Users can also employ other properties, such as msg.date (which returns the date and time the e-mail was sent), msg.from (which returns the e-mail address of the sender), and others.
In this example, msg.date was used to postpone redirecting e-mails sent during the weekend, via a Delay node, to the next Monday. The switch nodes used as "filters" actually perform the redirecting. A sketch of the weekend check is shown below.
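This function-node sketch is illustrative only (the example flow uses switch and Delay nodes); it assumes msg.date holds the time the e-mail was sent and that the node has two outputs configured.
// Output 1: forward immediately (weekday). Output 2: feed a Delay node (weekend).
const sent = new Date(msg.date);
const day = sent.getDay();          // 0 = Sunday, 6 = Saturday
if (day === 0 || day === 6) {
    return [null, msg];             // weekend: postpone via the Delay node
}
return [msg, null];                 // weekday: redirect right away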

4.6.4.3. Send e-mails¶
To send e-mails, users can employ the other Email node (the "email sender"). This is the node with the input; that is, it receives a message and sends it to the selected address.
It is also necessary to provide the e-mail address and password of the sender, so that the application can send the e-mail from the selected account. In this example, the same account as the "e-mail watcher" was used, but another account could be used here. This means that once a new e-mail is found and forwarded as a message in the flow, it has no necessary relation to the original account - it could be sent via another account.

4.6.5. Creating an HTTP endpoint¶
- API's basic structure
- Creating a database and adding new items to it
- Get a full list of items
4.6.5.1. API’s basic structure¶
To create a new HTTP endpoint in Node-RED, only two nodes are required: HTTP In and HTTP Response. These two nodes must be connected, and other nodes will be added to this structure so that the API executes its actions properly.
The user then drags these two nodes into the workspace and connects them. These two nodes are required for each endpoint of the API. Let's assume this first endpoint will display a webpage with all available endpoints within the domain.
To add an HTML page, the user adds a new Template node between the two nodes already created. In this node's Properties, change the "Syntax Highlight" field to HTML. After that, users can create their homepage for the API using HTML.
The first node (HTTP In) must be configured before deploying the flow. In its configuration, users will find three fields: Method, URL and Name. For this homepage, the method used will be "GET" and the URL will be "/homepage".

4.6.5.2. Creating a database and adding new items to it¶
The new database will be created once the user adds new items to it. The user, then,
will create a new endpoint (that is, a HTTP In
and a HTTP Response
node connected one to the other) that can receive information and add it to a
database in MongoDB. Besides these two nodes, it will also be necessary to add a
Add
node,
from the API category in the palette.
After that, the user will have to configure the HTTP In
node. Since the required action here is to add a new item to the database, the
method of this endpoint will be “POST”. It is also necessary to set a URL. In this
example, “/newuser”.
The last step is to configure the add
node.
Users will find seven fields here: Type
, Collection
,
Entity from
,
Result to
,
Write Concern
,
Journal
,
Name
. A
full description of the nodes in this category will be provided at MongoDB Entities. For this example, it will be
necessary only to fill in the name of the collection (again, if it does not exist
yet, it will be created automatically), the correct input (‘Entity from’) and output
(‘Result to’).
To test the API, it is possible to use any API tester. The request must follow the same method as specified in the configuration of the node, that is, “POST”. The body of the request must be in JSON format, since it will be added to the MongoDB database. If the response was “200”, it means that the API is working.
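If no graphical API tester is at hand, the check can also be scripted; this sketch assumes Node.js 18+ (for the built-in fetch), a locally deployed flow at http://localhost:1880, and is run as an ES module. The body fields are made up for illustration.
// Hypothetical test client; the URL and body fields are assumptions.
const res = await fetch("http://localhost:1880/newuser", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Jane Doe", email: "jane@example.com" })
});
console.log(res.status);   // 200 means the endpoint accepted the item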

4.6.5.3. Get a full list of items¶
For the last endpoint of this API, a list of all items will be retrieved. The design of the flow will be the same, that is, one HTTP In node, one Get node (instead of Add) and one HTTP Response node.
The method of the HTTP In node must be set to "GET" in this case. The URL used in the example is "/listusers".
After that, users must configure the get node. To get a full list of users, the "Query" field must be left blank. If needed, users can specify what information will be retrieved by this endpoint by setting the "Query" field to return the desired information. After that, the user must type the name of the collection.

4.6.6. Excel Detector¶
In this example, users will learn how to set up a detector which will automatically execute the Workflow created by us inside the Excel Read Workflow Example.
4.6.6.1. Setting up the Detector inside OpenRPA¶
To use a Detector, you must first define it inside the Detectors settings. The steps below show how to configure the FileWatcher Detector to check for new Microsoft Excel files.
The FileWatcher Detector plugin fires when any file is added inside a given Path. It also allows checking only for specific file extensions by using the File filter parameter, and it allows OpenRPA to watch for file changes inside subdirectories when the Sub Directories checkbox is checked.
First, click on the Settings tab inside the main ribbon.

Then, click on the Detectors icon.

Now click on the Add FileWatcherDetectorPlugin button inside the Designer. A new detector will appear, named FileWatcher.

Finally, configure the Name of the detector - in our case, Excel Detector. Also configure the Path which the Detector will listen to, and set a File filter so the detector only checks for files with a given extension - in our case, *.xlsx.

That’s it! You have properly set the Detector inside OpenRPA. Now proceed to the next section to learn how to set the detector inside Node-RED and invoke the workflow.
4.6.6.2. Configuring Detector and Invoking Workflow inside Node-RED¶
Navigate to the Node-RED instance, create a new flow and double-click its tab to rename it to Excel Detector. Then click the Done button to save it.
Note
Proceed to Accessing Node-RED for the first time for more information on how to set up your own Node-RED instance.

Now, drag an rpa detector node to the workspace.

Double-click the rpa detector node to open its Properties tab. Change its name to Excel Detector and select the Excel Detector in the Detector dropdown. Finally, click the Done button to finish configuring the node.

Now, drag an rpa workflow node to the workspace and wire it to the Excel Detector node.

Double-click the rpa workflow node to open its Properties tab.
In the Robot dropdown, select the OpenRPA client which will execute the Workflow when the detector fires; in our case, this is the paulo user.
Then select the OpenRPA Workflow which will be executed; in our case, this is the workflow created inside the Excel Read Workflow Example.
Finally, change the Name to Excel Workflow to make the workspace a little more user-friendly.

Click the Deploy button to save changes.
4.6.6.3. Add Debug nodes and test the Flow¶
In this section, users will learn how to add debug nodes to see the output of the flow execution.
Drag three debug nodes to the workspace and wire them to the Excel Workflow node.

Now, users can test the Flow by dropping a file inside the folder defined in the Detector settings tab inside OpenRPA.
After the execution of the OpenRPA workflow, users are able to see that it has executed and finished properly.

The debug output message is shown in Node-RED as well, specifying that a file was successfully detected.

This is the end of this workflow example!
4.6.7. MongoDB Entities¶
Coming soon - work in progress!
4.7. Node-RED - FAQ¶
4.7.1. Interfacing with OpenRPA¶
This section covers questions related to interfacing Node-RED with OpenRPA.
4.7.1.1. How to trigger an OpenRPA Workflow from Node-RED?¶
To do that, users can simply add an RPA Workflow node and wire it to either a workflow in node or an inject node. They must select the Workflow which will be run and the Robot which will execute it.
4.7.1.2. How to send a variable from OpenFlow Forms to Node-RED¶
This is thoroughly documented in the Using OpenFlow Forms section.
4.7.1.3. How to solve gyp ERR! stack Error: gyp failed with exit code when installing Node-RED from npm?¶
Users must run the following commands:
npm install -g node-gyp
node-gyp rebuild