Creating a new Dataset for exporting and importing data is a simple five-step process:
1. Create a new Dataset.
2. Configure the Process Message.
3. Configure the Send Message.
4. Set up an export schedule.
5. Build.
Datasets require the Global Utility in order for ServiceNow platform components to be created automatically.
1. Open Integration Designer and select or create an Integration.
2. From the main integration menu, select Datasets.
3. Click New.
4. Fill in the required information and choose your configuration.
5. Click Submit and view.
Creating a new Dataset will automatically configure several dependencies in your instance. These are:
- A Message called Process_<table>
- A Message called Send_<table>
- A Scheduled Import Set with the same name as the Dataset
- A Transform Map
Once the Dataset has been created, data is configured using Messages, Fields and Field Maps.
The Process Message handles the data mapping. Use the Fields list to configure which fields are being imported/exported.
1. From the Dataset details tab, navigate to the Process message and use the clickthrough button to open it.
2. From the Message, open the Fields list and add the fields that should be imported/exported. Each field will need a Dataset-specific Field Map for transforming the data.
3. Ensure at least one field will Coalesce so the import can match to existing records.
4. Click Build Message.
Field Maps designed for an eBonding integration may not work for Datasets. Use the included "Dataset..." Field Maps or create your own using these as an example.
The Send Message handles the data to be imported or exported as an attachment.
The Path will need to be configured depending on where the data is being sent. If you are connecting to another ServiceNow instance using a ShareLogic endpoint, you will likely need to configure the Path to use the dataset web service at path "/dataset".
1. From the Dataset details tab, navigate to the Send message and use the clickthrough button to open it.
2. Open the Outbound > Settings page.
3. Configure the Path as required, e.g. "/dataset".
4. Click Save.
Datasets have the same schedule logic as any other scheduled job and can be configured to export data whenever needed.
1. Open the Dataset.
2. Click Scheduling to open the scheduling tab.
3. Configure the desired schedule.
4. Click Save.
Turning off the schedule will prevent data from being automatically exported.
Several important features are built into the integration build process for Datasets. Ensure you build the integration whenever you have finished adding or making major changes to a Dataset.
- Scheduled Import Sets are created and updated.
- Transform Maps are created and updated.
Inbound Datasets use ServiceNow Import Sets giving you the ability to view dataset imports as you would any other data import.
For detailed Activity Logs, navigate to the Transform page on a Dataset and enable the Import logging option.
Datasets with Import logging enabled will generate one Activity Log for each Import Set Row.
Dataset Requests are a record of all Dataset imports and exports. They are linked directly to Transactions that belong to a Bond which is exclusive to the Dataset. The records are automatically deleted when the corresponding Transaction is deleted.
The number of Transactions to keep during the daily cleanup can be managed using the Cleanup option which can be configured on each Dataset. This value specifies the number of transactions that should remain following the cleanup process. The default retention is 10 records.
Any Dataset Requests which are orphaned, meaning they have no Dataset or no Transaction specified, will be removed based on the Orphan Record Cleanup Age (`x_snd_eb.orphan_record.cleanup.age`) system property. The default orphan cleanup age is 7 days.
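If you need to check the current value, the property can be read from a background script. This is a minimal sketch using the standard `gs.getProperty` call; the property name and the 7-day default come from the documentation above:

```javascript
// Minimal sketch: read the orphan cleanup age in a background script.
// The property name and 7-day default are documented above.
var age = gs.getProperty('x_snd_eb.orphan_record.cleanup.age', '7');
gs.info('Orphaned Dataset Requests are removed after ' + age + ' days.');
```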
When a Dataset export runs, the system will automatically export the records in small batches depending on its configuration.
Batch size options are:
- Max size: set the maximum file size to export. There is an internal maximum size of 5MB.
- Max rows: limit batches to a specific number of records.
- Max time: prevent long-running exports by setting a maximum export time.
The final export size is determined by whichever limit is reached first. If more records are available to be processed, additional Dataset Requests will be created until all records have been exported.
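As an illustration of how these limits interact, a batch closes as soon as any configured limit is reached. This is a sketch of the behaviour described above, not Unifi's internal implementation:

```javascript
// Illustrative sketch only; not Unifi's internal code.
// A batch is closed as soon as any configured limit is reached.
var INTERNAL_MAX_BYTES = 5 * 1024 * 1024; // internal 5MB cap

function shouldCloseBatch(batch, limits) {
  return batch.sizeBytes >= Math.min(limits.maxSizeBytes, INTERNAL_MAX_BYTES) ||
         batch.rowCount >= limits.maxRows ||
         (Date.now() - batch.startedMs) >= limits.maxTimeMs;
}
```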
New Field Maps can be configured for handling specific types of data in new ways. We recommend using the "Dataset" prefix when creating your new field map so it is easy to identify.
Dataset-specific Field Maps are slightly different to eBonding integration field maps. The main difference is that the Stage to Target script is executed as a ServiceNow Transform Map script. This means that certain objects are not available, including `transaction`, `bond`, and `request`.

These Field Maps should reference the Import Set Row fields using the standard `$stage` object. Field name conversion from the Import Set Row `source` object to the Field Map `$stage` object is handled automatically, i.e. "u_" prefixes are removed.

The `target` object, which represents the record being created or updated, can be used normally.

The `log` object is updated to reference `ws_console`, meaning all logs will be written to the Activity Log generated for each imported record. If required, logs can continue to be written to the Import Set Row using the `source` object.
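As a sketch, a Dataset Stage to Target script might look like the following. The field name `serial_number` is hypothetical, and the exact methods available on `target` and `log` should be checked against the included "Dataset..." Field Maps:

```javascript
// Illustrative sketch of a Dataset "Stage to Target" Field Map script.
// $stage, target and log are provided by Unifi; "serial_number" is a
// hypothetical field name used only for this example.

// Read the staged value; the "u_" prefix from the Import Set Row
// has already been removed on the $stage object.
var value = $stage.serial_number;

// Write the value to the record being created or updated.
target.setValue('serial_number', value);

// Logs are written to the per-record Activity Log via ws_console.
log.info('Mapped serial_number: ' + value);
```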
By default, Datasets are automatically configured for both import and export. If you require a one-way data export without import, you can prevent inbound messages from being processed.
1. From Integration Designer, navigate to the Dataset in your integration.
2. Find the Send message and click the clickthrough button to open it.
3. Change the Direction from Bidirectional to Outbound.
4. Click Save.
Datasets are designed to work with one table at a time. If you need to import/export more than one table, set up a Dataset for each table.
Externally generated files can be processed by a Dataset. Files should match the file type the Dataset expects, e.g. CSV or JSON, and be streamed to the Dataset import endpoint. The maximum file size will depend on your instance configuration.
`POST https://<instance>.service-now.com/api/x_snd_eb/unifi/<api_name>`
Send a file to be processed by a Dataset.
Example: https://acme.service-now.com/api/x_snd_eb/unifi/incident/dataset?file_name=cmdb_ci.csv&reference=Sync%20Server
It is possible to add support for ServiceNow IRE to your Dataset imports. This will push data through the reconciliation engine rather than directly updating the target records from the Import Set.
Follow these steps to configure your Dataset for use with IRE.
1. Create a new Data Source.
2. Create a Field Map for setting up the import for IRE.
3. Create a Field Map to give the data to IRE.
4. Add header and footer fields to the Dataset.
5. Modify Field Maps being used by the Dataset.
Remember to Build the Message/Integration when you have finished configuring the Fields.
When publishing Datasets (or any scheduled script) to an update set or application, especially if submitting to the ServiceNow Store, you need to ensure the "Run as" field is emptied since the user will likely not exist on the target system.
With the introduction of Datasets, it is now possible for large amounts of user, hardware, network and other supporting data to be sent to and received from remote systems. While this has been possible for some time using Pollers, Pollers are not designed to handle large amounts of data in the way that Datasets are.
Datasets provide easy configuration that is supported by platform intelligence and automation, making it incredibly easy to set up a robust and efficient mechanism for handling large sets of data.
Datasets require the latest Unifi Global Utility to build the necessary configuration correctly.
Each Dataset will create the following configuration:
- The Send message is used for sending the data in an attachment to the other system. The name is automatically generated using the prefix "Send" and the name of the table.
- The Process message is used for processing the record data both inbound and outbound. The name is automatically generated using the prefix "Process" and the name of the table.
- The Import set table is used for staging inbound data. The name is automatically generated using the prefix "dataset" followed by the sys_id of the Dataset.
- The Scheduled import record is automatically created for processing import data. It is executed when an inbound Dataset Request has created the import set rows and is ready to be processed.
- The Transform map and associated coalesce fields are used for processing the import set.
Data is automatically collected, transformed, packaged and sent to another system on a schedule. Each export creates one or more Dataset Requests depending on the number of records and a series of limits configurable on the Dataset.
Unifi automatically creates a Process message which can be used with outbound Fields and Field maps (via the Source to Stage and Stage to Request Message Scripts) to extract the data from the specified records.
The extracted data is written to an attachment which is then sent using the Send message which is also automatically created by Unifi.
When an inbound Send message (as specified on the Dataset) with a data attachment is received, the attachment will be processed, with each record being inserted into an import set table directly related to the Dataset. The import set table is automatically created and maintained during the Integration Build process.
Unifi uses ServiceNow Import Sets since they offer a well understood mechanism for importing data with performance benefits. The mapping is automatically managed by Unifi and handled through a transform script which uses inbound Fields and Field maps (via the Stage to Target Message Script) to transform the data and write it to the target records.
The process message related to the Dataset is used for inbound and outbound mapping and transformation.
Simply create the fields that you want to export/import and configure them in the same way you would for any other Unifi message. Note: the field maps will likely need to be specific to the dataset fields.
Use the Coalesce field (specific to Dataset messages) to indicate which field should be used for identifying existing records to update during inbound processing.
Query parameters:

| Name | Type | Description |
| --- | --- | --- |
| file_name* | String | Name of the file |
| reference* | String | The name of the dataset |

Headers:

| Name | Type | Description |
| --- | --- | --- |
| Content-Type* | String | text/csv OR application/json |
| x-snd-eb-message-name* | String | The name of the Send message, e.g. Send_<table> |
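For example, a file could be sent from a Node.js script. This is a minimal sketch assuming Node 18+ (for the global `fetch`) and basic auth for an integration user; the instance, credentials, file and message names are illustrative:

```javascript
// Minimal sketch (Node.js 18+): POST a CSV file to the Dataset import endpoint.
// Instance, credentials, file and message names are illustrative values.
const fs = require('fs');

const url = 'https://acme.service-now.com/api/x_snd_eb/unifi/incident/dataset' +
  '?file_name=cmdb_ci.csv&reference=Sync%20Server';

fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'text/csv',
    'x-snd-eb-message-name': 'Send_cmdb_ci',
    // Basic auth is an assumption; use whichever auth your instance requires.
    'Authorization': 'Basic ' + Buffer.from('user:password').toString('base64'),
  },
  body: fs.readFileSync('cmdb_ci.csv', 'utf8'),
}).then(function (res) {
  console.log('Status: ' + res.status);
});
```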