Dataset Extras

Import Logging

Inbound Datasets use ServiceNow Import Sets, so you can view Dataset imports just as you would any other data import.

For detailed Activity Logs, navigate to the Transform page on a Dataset and enable the Import logging option.

Datasets with Import logging enabled will generate one Activity Log for each Import Set Row.

Dataset Cleanup

Dataset Requests record when each Dataset import and export occurred. They are linked directly to Transactions belonging to a Bond that is exclusive to the Dataset, and they are automatically deleted when the corresponding Transaction is deleted.

The number of Transactions to keep during the daily cleanup can be configured on each Dataset using the Cleanup option; this value is the number of Transactions that remain once cleanup has run. The default retention is 10 records.
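The retention rule can be sketched as follows. This is a plain JavaScript illustration with made-up names, not the app's actual implementation:

```javascript
// Sketch of the daily cleanup retention rule: keep the newest `retain`
// Transactions and select everything older for deletion.
function selectTransactionsToDelete(transactions, retain) {
  var sorted = transactions.slice().sort(function (a, b) {
    return b.created - a.created; // newest first
  });
  return sorted.slice(retain); // everything beyond the retained count
}

// With the default retention of 10, two of twelve records would be removed.
var txns = [];
for (var i = 1; i <= 12; i++) {
  txns.push({ id: i, created: i }); // higher `created` means newer
}
var toDelete = selectTransactionsToDelete(txns, 10); // the two oldest records
```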

Any orphaned Dataset Requests, meaning those with no Dataset or no Transaction specified, are removed based on the Orphan Record Cleanup Age system property (x_snd_eb.orphan_record.cleanup.age). The default orphan cleanup age is 7 days.

Export Tuning

When a Dataset export runs, the system will automatically export the records in small batches depending on its configuration.

Batch size options are:

  • Max size: set the maximum file size to export. There is an internal maximum size of 5MB.

  • Max rows: limit batches to a specific number of records.

  • Max time: prevent long running exports by setting a maximum export time.

The final export size is determined by whichever limit is reached first. If more records are available to be processed, additional Dataset Requests will be created until all records have been exported.
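The "whichever limit is reached first" behaviour can be sketched in plain JavaScript. Function and option names here are illustrative, not the app's internal API:

```javascript
// Sketch of batch closing: a batch is finalised as soon as ANY configured
// limit is reached. Unset limits are simply skipped.
function shouldCloseBatch(state, limits) {
  if (limits.max_size && state.bytes >= limits.max_size) return true;      // file size limit
  if (limits.max_rows && state.rows >= limits.max_rows) return true;       // row count limit
  if (limits.max_time && state.elapsed_ms >= limits.max_time) return true; // time limit
  return false;
}

// Max rows is hit well before max size or max time, so the batch closes here.
var close = shouldCloseBatch(
  { bytes: 1024 * 1024, rows: 1000, elapsed_ms: 5000 },
  { max_size: 5 * 1024 * 1024, max_rows: 1000, max_time: 60000 }
);
```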

Create a Dataset Field Map

New Field Maps can be configured for handling specific types of data in new ways. We recommend using the "Dataset" prefix when creating your new field map so it is easy to identify.

Dataset specific Field Maps are slightly different to eBonding integration field maps. The main difference is that the Stage to Target script is executed as a ServiceNow Transform Map script. This means that certain objects are not available, including transaction, bond, and request.

These Field Maps should reference the Import Set Row fields using the standard $stage object. Field name conversion from the Import Set Row source object to the Field Map $stage object is handled automatically, i.e. "u_" prefixes are removed.
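The name conversion described above can be sketched like this. The function name is illustrative; the conversion itself is handled for you by the app:

```javascript
// Sketch of the Import Set Row -> $stage field name conversion: ServiceNow
// prefixes staging table columns with "u_", which is stripped on $stage.
function toStageFieldName(column) {
  return column.indexOf('u_') === 0 ? column.substring(2) : column;
}

var stageName = toStageFieldName('u_name'); // "name"
var untouched = toStageFieldName('sys_id'); // no prefix, returned as-is
```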

The target object which represents the record being created or updated can be used normally.

The log object is updated to reference ws_console meaning all logs will be written to the Activity Log generated for each imported record. If required, logs can continue to be written to the Import Set Row using the source object.

Disable Dataset Import

By default, Datasets are automatically configured for both import and export. If you require a one-way data export without import, you can prevent inbound messages from being processed.

  1. From Integration Designer, navigate to the Dataset in your integration.

  2. Find the Send message and click the clickthrough button to open it.

  3. Change the Direction from Bidirectional to Outbound.

  4. Click Save.

Multiple Tables

Datasets are designed to work with one table at a time. If you need to import/export more than one table, set up a Dataset for each table.

Sending Data from a Third Party

Externally generated files can be processed by a Dataset. Files should match the file type the Dataset expects, e.g. CSV or JSON, and be streamed to the Dataset import endpoint. The maximum file size will depend on your instance configuration.

POST https://<instance><api_name>

Send a file to be processed by a Dataset.


Query Parameters

  • Name of the file

  • The name of the dataset

  • The content type: text/csv OR application/json

  • The name of the Send message, e.g. Send_<table>
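An upload from a third party can be sketched with curl. Everything below is hypothetical: the instance, endpoint path, and query parameter names are placeholders, so substitute the values defined by your Dataset's endpoint.

```shell
# Build the endpoint URL. Instance and path here are hypothetical placeholders;
# use the endpoint defined for your Dataset.
INSTANCE="acme.service-now.com"
API_NAME="/api/x_snd_eb/dataset"
URL="https://${INSTANCE}${API_NAME}"
echo "$URL"

# Stream a CSV file to the import endpoint. The query parameter names below are
# illustrative placeholders; use the parameter names your endpoint defines.
# curl -X POST "$URL?file_name=users.csv&dataset=User&message=Send_sys_user" \
#      -H "Content-Type: text/csv" \
#      --data-binary @users.csv \
#      -u "integration.user:${PASSWORD}"
```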

Using IRE (Identification and Reconciliation Engine)

It is possible to add support for ServiceNow IRE to your Dataset imports. This will push data through the reconciliation engine rather than directly updating the target records from the Import Set.

Follow these steps to configure your Dataset for use with IRE.

  1. Create a new Data Source.

  2. Create a Field Map for setting up the import for IRE.

  3. Create a Field Map to give the data to IRE.

  4. Add header and footer fields to the Dataset.

  5. Modify Field Maps being used by the Dataset.

Create a new Data Source

A new Data Source is required for IRE rules to be configured. You can add this manually to the sys_choice table on the discovery_source field, or create it using this fix script.

We recommend the data source name matches the integration name. This is how the IRE header Field Map is set up in this example.

If you are packaging Datasets in Scoped Applications, you should create a Fix Script that runs when the application is installed.

var dsUtil = new global.CMDBDataSourceUtil();
dsUtil.addDataSource("ACME CMDB");

Create a Field Map for setting up the import for IRE


Use IRE - Header


Sets up a Dataset Import for using the ServiceNow IRE (Identification and Reconciliation Engine) to import CMDB data.

Do not map fields using this field map to a CMDB attribute. Ensure that it is the first field to be processed by setting the field order to an appropriately small number.

This field map will create an $ire_item object on the $stage which will store all the values to give to IRE.

Stage to Target

// Setup an IRE item for this record
$stage.$ire_item = {
  className: target.getTableName(),
  internal_id: '' + target.sys_id,
  values: {},
  lookup: [],
  sys_object_source_info: {
    source_feed: '$[] Dataset ' + target.getTableName(),
    source_name: '$[]',
    source_native_key: '' + target.sys_id,
    source_recency_timestamp: '' + new GlideDateTime()
  }
};

Create a Field Map to give the data to IRE


Use IRE - Footer


Use the ServiceNow IRE (Identification and Reconciliation Engine) to import CMDB data.

Do not map fields using this field map to a CMDB attribute. Ensure that it is the last field to be processed by setting the field order to an appropriately large number.

This field map will convert the inbound payload to the IRE format, call the createOrUpdateCI() function, and ignore the current import set row.

Stage to Target

var input = JSON.stringify({ items: [$stage.$ire_item] }, null, 2);

x_snd_eb.ws_console.debug('Payload for IRE is:\n\n {0}', [input]);

var source_name = $stage.$ire_item.sys_object_source_info.source_name;
var output = sn_cmdb.IdentificationEngine.createOrUpdateCI(source_name, input);

x_snd_eb.ws_console.debug('Output from IRE:\n\n {0}', [output]);

ignore = true;
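With the header and footer field maps in place, the payload handed to sn_cmdb.IdentificationEngine.createOrUpdateCI() would look something like this. All values are illustrative:

```json
{
  "items": [
    {
      "className": "cmdb_ci_server",
      "internal_id": "9f8e7d6c5b4a39281706f5e4d3c2b1a0",
      "values": {
        "name": "web01",
        "ip_address": "10.0.0.5"
      },
      "lookup": [],
      "sys_object_source_info": {
        "source_feed": "ACME CMDB Dataset cmdb_ci_server",
        "source_name": "ACME CMDB",
        "source_native_key": "9f8e7d6c5b4a39281706f5e4d3c2b1a0",
        "source_recency_timestamp": "2024-01-01 00:00:00"
      }
    }
  ]
}
```

Reference fields mapped with the reference snippet below would add entries to the lookup array.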

Modify Field Maps

Each Field needs to be registered for processing by IRE. This is achieved by adding the following snippets (depending on field type) to the end of the Stage to Target script for each Field Map being used in the Dataset.

Standard Field Map - Stage to Target

Most fields only need to tell IRE the field name and the value.

// Store field values for IRE
if ($stage.$ire_item) {
  $stage.$ire_item.values['$[field.element]'] = target.getDisplayValue('$[field.element]');
}

Reference Field Map - Stage to Target

Reference fields need an additional lookup object to pass into IRE.

// Store field values for IRE
if ($stage.$ire_item) {
  $stage.$ire_item.values['$[field.element]'] = target.getDisplayValue('$[field.element]');

  // Add the lookup object for reference fields
  $stage.$ire_item.lookup.push({
    className: target.$[field.element].getReferenceTable(),
    values: {
      name: '' + target.$[field.element].getDisplayValue()
    }
  });
}

Remember to Build the Message/Integration when you have finished configuring the Fields.

Publishing with Run As user

When publishing Datasets (or any scheduled script) to an update set or application, especially if submitting to the ServiceNow Store, ensure the "Run as" field is emptied, since the specified user is unlikely to exist on the target system.