# Exports

## List exports

> Returns a list of all exports configured in the account.\
> If no exports exist in the account, a 204 response with no body will be returned.
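
Below is a minimal client sketch (Python, using the third-party `requests` package) of how a caller might consume this endpoint, covering the documented 204 empty-account case and the `include` summary projection described in the spec that follows. The token value is a placeholder, and the helper name is illustrative, not part of the API.

```python
import requests

BASE_URL = "https://api.integrator.io"  # EU-region accounts: https://api.eu.integrator.io
TOKEN = "YOUR_API_TOKEN"  # placeholder: supply a real bearer token

def list_exports(include_fields=None):
    """List all exports, optionally requesting a summary projection.

    Passing `include` on this list endpoint returns a minimal identity set
    per record (_id, name, adaptorType) plus any listed fields that exist
    on the record. `include` and `exclude` are mutually exclusive; sending
    both returns 400 invalid_query_params.
    """
    params = {}
    if include_fields:
        params["include"] = ",".join(include_fields)
    resp = requests.get(
        f"{BASE_URL}/v1/exports",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params=params,
    )
    if resp.status_code == 204:
        return []  # documented: no exports configured -> 204 with no body
    resp.raise_for_status()
    return resp.json()

exports = list_exports()  # full export records
ftp_dirs = list_exports(["ftp.directoryPath"])  # dot-notation nested projection
```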

````json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"parameters":{"Include":{"name":"include","in":"query","required":false,"description":"Comma-separated list of fields to project into each returned record.\nTriggers **summary projection** on supported list endpoints: the server\nreturns a minimal identity set for each record (`_id`, `name`, plus a\nresource-specific always-on set like `adaptorType` on exports/imports,\nor richer defaults on `ashares`, `audit`, `httpconnectors`, `transfers`,\netc.) and adds any listed fields that exist on the record. Listed fields\nthe record doesn't carry are silently dropped.\n\nDot notation is supported for projecting nested sub-fields — e.g.\n`include=ftp.directoryPath` on `/v1/exports` returns just that nested\nfield inside `ftp` for FTP-type exports (and omits `ftp` entirely for\nnon-FTP exports).\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`include=`) or bare `include` is ignored — the full\n  default record is returned.\n- `include` and `exclude` are **mutually exclusive**. Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Array-bracket syntax (`include[]=...`) is not supported and can return\n  a 500.\n- Only list endpoints honor projection — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}},"Exclude":{"name":"exclude","in":"query","required":false,"description":"Comma-separated list of fields to remove from the default response on\nsupported list endpoints. Unlike `include`, `exclude` does NOT trigger\nsummary projection — callers get the standard full-record shape with the\nnamed fields stripped out.\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`exclude=`) is ignored.\n- Certain protected identity fields **cannot be stripped** — e.g.\n  `exclude=name` on `/v1/exports` is silently ignored and `name` remains\n  in the response. Protected sets vary per resource.\n- `include` and `exclude` are **mutually exclusive**. 
Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Only list endpoints honor stripping — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}}},"schemas":{"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this export."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this export was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this export."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this export is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this export expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this export."}}}]},"Request":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor clarity when managing multiple integrations. 
The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. 
This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). 
The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid \"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate limiting and concurrency controls apply differently\n\nWhen false (default), the export 
operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File"}}},"http":{"$ref":"#/components/schemas/Http"},"file":{"$ref":"#/components/schemas/File"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"as2":{"$ref":"#/components/schemas/AS2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"ftp":{"$ref":"#/components/schemas/FTP"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports 
that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. `adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. 
Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional 
acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. 
It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. 
Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adapter types (except HTTP and REST where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction (HTTP adaptor):**\n- For HTTP exports, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" in the delta configuration of an HTTP export; if you include it, the configuration will be invalid.
\n\nExample HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types. 
Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. 
Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. 
Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. **Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. **System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. 
May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**Hmac Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more complex to configure\n\n**Secret url**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose 
the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**Sha-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**Sha-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**Sha-384/sha-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": 
\"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. 
It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
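\n\nFor illustration, the token-verification fields often appear together like this (placeholder values):\n\n```\n\"verify\": \"token\",\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\",\n\"path\": \"meta.token\"\n```\n\n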
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
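\n\nA hypothetical Basic Authentication fragment (placeholder credentials only):\n\n```\n\"verify\": \"basic\",\n\"username\": \"webhook_user\",\n\"password\": \"Xy7!qLm4#Rt9@Wz2\"\n```\n\n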
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 OK**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**JSON (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**XML**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n
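\nThese headers are returned together with the other success-response settings; a hypothetical combined fragment (values illustrative only):\n\n```\n\"successStatusCode\": 200,\n\"successMediaType\": \"json\",\n\"successBody\": \"{\\\"success\\\":true}\",\n\"successResponseHeaders\": [\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"}\n]\n```\n\nIMPORTANT: Be careful when setting\n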
security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook provider's documentation for exact\n
requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 OK (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**JSON (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**XML**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. 
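\n\nAs a combined sketch, a hypothetical Slack-style verification setup (structure illustrative):\n\n```\n\"challengeSuccessStatusCode\": 200,\n\"challengeSuccessMediaType\": \"json\",\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n\n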
Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**UTF-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**UTF-16LE (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**GB18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**ISO-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift JIS (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for ai agents**\n\n1. **Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. 
**Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**CSV Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**JSON Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**XML Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n
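\nFor instance, a hypothetical CSV selection pairs the type with its format object (fragment only; sibling settings omitted):\n\n```\n\"type\": \"csv\",\n\"csv\": {\n  \"columnDelimiter\": \",\",\n  \"hasHeaderRow\": true\n}\n```\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data.\n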
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for ai agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. 
**Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. **Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for ai agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for ai agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. **Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**PGP/GPG Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. **Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for ai agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects concurrent processing but within system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
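\n\nFor context, a hypothetical two-field sort combining both directions (field names illustrative):\n\n```json\n\"sortByFields\": [\n  {\"field\": \"customerId\", \"descending\": false},\n  {\"field\": \"orderDate\", \"descending\": true}\n]\n```\n\n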
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes| Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quotes handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard CSV (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European CSV (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. **Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. 
**Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. **Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. 
**Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When headers exist but should be ignored, use `hasHeaderRow: false` with `rowsToSkip: 1`, so the header line is skipped and every remaining row is treated as data\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for ai agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count from Zero**:\n    - Row 1 = 0, Row 2 = 1, Row 3 = 2, etc.\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. **For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. 
**Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior - test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**Json parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. **Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. 
**Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. 
**Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. **Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- \"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". 
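A minimal configuration sketch, assuming `type` and `xlsx` are sibling fields as described above (the enclosing export object is omitted):\n\n```json\n\"type\": \"xlsx\",\n\"xlsx\": {\n  \"hasHeaderRow\": true\n}\n```\n\n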
This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. **Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields 
are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. **Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
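A minimal configuration sketch, assuming `type` and `xml` are sibling fields as described above (the XPath value follows the Simple Element List example below):\n\n```json\n\"type\": \"xml\",\n\"xml\": {\n  \"resourcePath\": \"/Records/Record\"\n}\n```\n\n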
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**Xml parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced xml**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or `/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**Xpath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. 
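The example patterns above, side by side as resourcePath values:\n\n```\n/Records/Record                  absolute path from the document root\n//ns:Customer                    any-level selection with a namespace prefix\n/Feed/Entry[@type=\"product\"]     predicate filter on an attribute value\n```\n\n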
When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For repeating integration, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**Xpath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. **Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. 
**Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. **Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common xpath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
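A minimal configuration sketch (the ID shown is a placeholder, not a real file definition):\n\n```json\n\"type\": \"filedefinition\",\n\"fileDefinition\": {\n  \"_fileDefinitionId\": \"<file-definition-id>\"\n}\n```\n\n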
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. **File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**Edi Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique 
identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct id**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. **File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese are the most commonly available fields across most file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
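For example (the directory shown is purely illustrative):\n\n```json\n\"backupPath\": \"/archive/exports/backups\"\n```\n\n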
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. **Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of 
filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. 
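For example, a complete rule-based filter (a sketch reusing the status rule documented under `rules` below):\n\n```json\n\"filter\": {\n  \"type\": \"expression\",\n  \"expression\": {\n    \"version\": \"1\",\n    \"rules\": [\"notequals\", [\"extract\", \"status\"], \"cancelled\"]\n  }\n}\n```\n\n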
This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- 
Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. 
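As a minimal sketch of the full filter object in this mode (the ObjectId and function name below are placeholders):\n\n```json\n{\n  \"type\": \"script\",\n  \"script\": {\n    \"_scriptId\": \"507f1f77bcf86cd799439011\",\n    \"function\": \"filterItems\"\n  }\n}\n```\n\n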
This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Http":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (most common case)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (rare use case)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files from an FTP server via HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
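For example, assuming a connection whose baseURI is `https://api.example.com/v2` (hypothetical), a relativeURI of `/orders?status=pending` produces requests to `https://api.example.com/v2/orders?status=pending`. 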
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. 
Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for AI agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. **page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. 
**skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. **nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. 
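As an end-to-end sketch (the endpoint, parameter names, and response path here are illustrative, not prescriptive), a page-number API that starts at page 1 and reports its total page count might be configured as:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1,\n  \"maxPagePath\": \"meta.totalPages\"\n}\n```\n\npaired with a relative URI such as `/orders?page={{export.http.paging.page}}&limit=100`. 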
Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for AI agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. 
When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page url pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error** PATTERNS\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use** \"body\":\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\" \n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. 
Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. 
This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. 
For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. 
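With Pattern 1 above, for instance, the correct setting is `\"maxPagePath\": \"meta.totalPages\"`, not `\"meta.currentPage\"`. 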
The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. `[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. 
Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
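For example, if a failing call returns `{\"error\": {\"message\": \"Invalid cursor\"}}` (a hypothetical shape), setting `errorPath` to `error.message` surfaces \"Invalid cursor\" as the error text. 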
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but inverted logic — checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used.\n\nIdentifies which endpoint definition within the HTTP connector this export calls, so that requests are routed to the correct endpoint configuration (URL, headers, authentication, and related settings).\n\n**Implementation guidance**\n- Must reference a valid, existing HTTP connector endpoint; a missing or invalid reference will typically cause the export to fail.\n- Avoid changing the reference after initial assignment to keep routing behavior consistent.\n- Changes to the endpoint configuration this ID points to affect every integration that relies on it.\n"}}},"Salesforce":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce, either through queries or real-time events.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n**Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**SOQL Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. 
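For reference, a minimal SOQL-mode configuration looks like this (the query text is illustrative):\n\n```json\n{\n  \"type\": \"soql\",\n  \"soql\": {\n    \"query\": \"SELECT Id, Name FROM Account\"\n  }\n}\n```\n\n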
This decision affects all other configuration aspects.\n","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": \"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**Soql Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static File id**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic File id (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous 
steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**Rest api**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk api 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your 
data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the 
export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- Use * sparingly (only with specific sObjects that support it)\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Case-sensitive (match Salesforce API names exactly)\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When using relationship queries, child objects count against\ngovernor limits differently. 
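\n\nFor example, a relationship query that also applies a delta filter might look like:\n```\nSELECT Id, Name, (SELECT Id, Email FROM Contacts)\nFROM Account\nWHERE LastModifiedDate > {{lastExportDateTime}}\nORDER BY Id\n```\n\n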
For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":200000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event 
handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
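\n\nFor example, a high-volume listener might combine a qualifier with a larger batch size (a sketch; field values are illustrative):\n```\n{\n  \"type\": \"distributed\",\n  \"distributed\": {\n    \"qualifier\": \"Amount > 1000\",\n    \"batchSize\": 100\n  }\n}\n```\n\n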
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild records related to the primary object. 
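\n\nA complete entry might look like this sketch (the property descriptions follow; values are illustrative):\n```\n{\n  \"sObjectType\": \"Contact\",\n  \"parentField\": \"AccountId\",\n  \"referencedFields\": [\"FirstName\", \"LastName\", \"Email\"],\n  \"filter\": \"IsActive = true\",\n  \"orderBy\": \"CreatedDate DESC\"\n}\n```\n\n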
Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- Empty array will retrieve only Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = 
true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**As2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**As2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. 
It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid 
operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP":{"type":"object","description":"Configuration object for 
FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Jdbc export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the 
actual record ID from each exported row.\n"}}}}},"MongoDB":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently provide the following capabilities:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, 
incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum 
size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**Netsuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.blob`\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"` and configure `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. 
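\n\nSketches of the two shapes (internal IDs are placeholders):\n\n**Blob export (no parsing)**\n```\n{\n  \"type\": \"blob\",\n  \"netsuite\": { \"internalId\": \"1234\" }\n}\n```\n\n**File export (with parsing)**\n```\n{\n  \"netsuite\": {\n    \"type\": \"file\",\n    \"file\": { \"folderInternalId\": \"5678\" }\n  }\n}\n```\n\n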
For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This allows you to use `suiteapp2.0` saved searches with dynamic inputs easily.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob exports (raw file transfer without parsing)"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"A collection of key-value pairs that provide additional contextual information about the NetSuite entity. This metadata can include custom attributes, tags, or any supplementary data that helps to describe, categorize, or operationally enhance the entity beyond its standard properties. 
"selectoption":{"type":"object","description":"Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each option pairs a user-facing label with a value that uniquely identifies it internally, enabling consistent data entry, filtering, and categorization within NetSuite forms and records.\n\n**Field behavior**\n- Defines a single, discrete choice presented in selection interfaces such as dropdowns, radio buttons, or multi-select lists, usually as part of a collection of options.\n- Includes a display label (visible to users) and a corresponding value (used internally and in API interactions).\n- May be statically defined or dynamically generated depending on the field configuration, and can drive filtering, categorization, and conditional logic.\n\n**Implementation guidance**\n- Assign each option a unique, stable value to prevent ambiguity and preserve data integrity, and keep labels clear, concise, and user-friendly.\n- Localize labels where needed without altering the underlying values, validate values against expected data types and formats, apply the structure consistently across fields with predefined choices, and follow accessibility best practices for labels and controls.\n\n**Examples**\n- { \"label\": \"Active\", \"value\": \"1\" }\n- { \"label\": \"Inactive\", \"value\": \"2\" }\n- { \"label\": \"Pending Approval\", \"value\": \"3\" }\n- { \"label\": \"High Priority\", \"value\": \"high\" }\n\n**Important notes**\n- The label is for display and may be localized; the value is the definitive identifier used in data processing and API calls, and should remain stable over time to avoid breaking integrations or corrupting data.\n- Available options may be influenced by the parent record's context, user roles, or permissions, and changes to values or labels should be managed carefully to prevent unintended side effects.\n\n**Dependency chain**\n- Utilized within field definitions that support selection inputs (e.g., dropdowns, radio buttons, multi-select lists)."},
"customFieldMetadata":{"type":"object","description":"Metadata describing the custom fields defined within the NetSuite environment, covering each field's configuration, behavior, and constraints to support accurate data handling and UI generation.\n\n**Field behavior**\n- May include attributes such as field ID, label, data type, default values, validation rules, display settings, sourcing information, and field dependencies.\n- Reflects the current state of custom fields in the NetSuite account and can be used to dynamically generate UI elements, validation logic, or data structures.\n\n**Implementation guidance**\n- Synchronize regularly, or on configuration changes, so the metadata stays consistent as custom fields are added, modified, or removed.\n- Use the metadata to validate input against custom field constraints (data type, required status, allowed values) before processing or submission.\n- Caching is reasonable for performance, but refresh it periodically or on demand, and handle null, incomplete, or partially loaded metadata gracefully with fallback logic or error handling.\n- Respect user permissions and access controls when retrieving or exposing custom field metadata.\n\n**Examples**\n- Metadata for a custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- Metadata for a custom list/record field specifying the valid options, their internal IDs, and whether multiple selections are allowed.\n- Metadata for a custom date field including its date format, minimum and maximum allowed dates, and validation rules.\n- Metadata for a custom currency field with precision settings and a default currency.\n\n**Important notes**\n- The structure and content vary with the NetSuite configuration, customizations, and API version, and access may require appropriate permissions; unauthorized access may yield incomplete or no metadata.\n- Changes to custom fields in NetSuite (renaming, deleting, or changing data types) can impact the metadata and any integrations that depend on it.
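\n\n**Example**\n\nA hypothetical sketch of the shape such metadata might take; the key and attribute names here are illustrative only, not a guaranteed schema:\n\n```json\n{\n  \"custfield_123\": {\n    \"label\": \"Approved\",\n    \"fieldType\": \"checkbox\",\n    \"defaultValue\": false,\n    \"displayType\": \"inline\"\n  }\n}\n```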
"},"skipGrouping":{"type":"boolean","description":"Indicates whether to bypass the grouping of related records or transactions during processing, so each item is handled individually rather than aggregated into groups.\n\n**Field behavior**\n- true: each record or transaction is processed independently, without combining items that share grouping attributes.\n- false or omitted (default): related records are aggregated according to the configured grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Affects how data is structured, summarized, and reported downstream, and the level of detail and granularity available in the processed data.\n\n**Implementation guidance**\n- Enable when record-level detail is required, such as detailed audits or troubleshooting; keep grouping enabled for summary-style reports.\n- Confirm that downstream systems, reports, and integrations can accommodate ungrouped data, and assess the performance and data-volume impact, since disabling grouping can significantly increase the number of processed items.\n\n**Important notes**\n- Enabling skipGrouping can increase processing time, memory usage, and output size, and some reports, dashboards, or integrations may require grouped data; verify compatibility before enabling it.\n- Changing this setting may affect data consistency and comparability with previously generated reports.\n\n**Dependency chain**\n- Interacts with properties that define grouping keys or criteria (e.g., groupBy fields), with filtering, sorting, and pagination settings, and with downstream aggregation or summary calculations.\n\n**Technical details**\n- Data type: Boolean; default: false (grouping enabled). Implemented as a conditional flag checked during the aggregation phase: when true, aggregation logic is bypassed and each record is processed individually."},
"statsOnly":{"type":"boolean","description":"Indicates whether the API response should include only aggregated statistical summary data, without detailed individual records. Use it to shrink response size and improve performance when record-level detail is unnecessary, e.g., for dashboards, reports, or monitoring tools.\n\n**Field behavior**\n- true: only summary statistics are returned, such as counts, averages, sums, or other aggregate metrics.\n- false or omitted (default): detailed data records are returned along with the associated statistical summaries.\n\n**Implementation guidance**\n- Default to false so full data is retrieved unless summary-only output is explicitly requested, and verify the endpoint actually supports this flag before relying on it, as some endpoints may not implement it.\n- Validate that the input is a boolean, and adjust client-side logic to handle the different response structures, since the schema changes significantly when statsOnly is true.\n\n**Important notes**\n- Enabling statsOnly removes access to individual record details, which may limit in-depth analysis, and pagination parameters may be ignored or behave differently because detailed records are excluded.\n\n**Dependency chain**\n- Interacts with filtering, sorting, and date-range parameters that shape the returned statistics, affects pagination logic, and depends on endpoint support for summary-only responses.\n\n**Technical details**\n- Data type: Boolean; default: false. Typically implemented as a query parameter or part of the request payload, it alters the response by excluding detailed record arrays and returning only aggregated metrics.
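\n\n**Example**\n\nA minimal sketch showing where this flag and skipGrouping sit inside a saved-search export's `netsuite` block (the saved search ID is a placeholder):\n\n```json\n{\n  \"netsuite\": {\n    \"type\": \"search\",\n    \"searches\": [{ \"savedSearchId\": \"10\", \"recordType\": \"customer\" }],\n    \"skipGrouping\": true,\n    \"statsOnly\": false\n  }\n}\n```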
"},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export's top-level type is \"blob\". It identifies the file to transfer, and that file is exported as-is, without parsing.\n\n**Implementation guidance**\n- For blob exports: set type to \"blob\" on the export's top level (not on netsuite) and provide netsuite.internalId.\n- Obtain the file's internalId from NetSuite's file cabinet or via API, and validate that it corresponds to an existing file before export.\n\n**Examples**\n- \"12345\" - internal ID of a specific file\n- \"67890\" - another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId, which specifies a folder for file exports with parsing: blob exports use netsuite.internalId (a file ID), while file exports with parsing use netsuite.file.folderInternalId (a folder ID)."},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"Whether to delete the source file after it has been successfully exported, to help manage storage and keep the system clean.\n\n**Field behavior**\n- true: the system deletes the file as soon as the export completes without errors; deletion happens only after export success is confirmed and does not affect the export process itself.\n- false or omitted (default): the file remains intact in its original location after export.\n\n**Implementation guidance**\n- Verify the executing user or system has permission to delete files from the source location, and check whether downstream workflows or processes still need the file before enabling purging.\n- Log or send notifications when files are purged, for audit trails and troubleshooting, and coordinate with retention policies or backup systems to prevent accidental loss of important data.\n\n**Important notes**\n- Deletion is permanent and cannot be undone; use caution in multi-user or multi-process environments where files may be shared, and where purging too soon could conflict with backup, archival, or compliance requirements.\n- Consider safeguards or confirmation steps before enabling automatic purging in production environments.\n\n**Dependency chain**\n- Relies on successful export completion to trigger deletion, depends on file-system permissions and access controls, may be affected by retention, archival, or cleanup policies, and should interact with error handling so nothing is deleted when an export fails or is incomplete.\n\n**Technical details**\n- Typically a boolean; the default behavior is to retain files unless explicitly set to true. Deletion should be performed through secure, permission-checked file operations."}},
"description":"Configuration for retrieving raw binary files from the NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: raw binary transfer WITHOUT parsing; leave netsuite.type blank.\n- File exports: parse file contents into records; set netsuite.type to \"file\".\n\nDo NOT use the blob configuration when you want file content parsed into data records.\n\n**Field behavior**\n- Transfers raw binary content (files, images, audio, video, or any non-textual data) from the file cabinet as-is, with no parsing or transformation; content may be immutable or mutable depending on the specific NetSuite entity and operation.\n\n**Implementation guidance**\n- Encode binary data (e.g., base64) when transmitting over text-based protocols such as JSON or XML, and keep encoding and decoding consistent between client and server to avoid corruption.\n- Validate blob size against NetSuite API limits and storage constraints, use appropriate MIME/content-type headers when uploading or downloading, and prefer chunked or streamed transfers for large blobs.\n- Handle sensitive binary data securely, including encryption in transit and at rest.\n\n**Examples**\n- A base64-encoded PDF document attached to a NetSuite customer record.\n- An image file (PNG or JPEG) stored as a blob for product catalog entries.\n- A binary export of transaction data in a proprietary format used for integration with external systems.\n- Encrypted binary blobs containing sensitive configuration or credential data.\n\n**Important notes**\n- Blob size may be limited by NetSuite API constraints or underlying storage; exceeding these limits can cause failures, and inconsistent encoding or decoding can corrupt or lose data.\n- Large blobs should be handled with chunked or streamed transfers to avoid memory issues, and access to blob data requires authentication and authorization aligned with NetSuite's security model.\n\n**Dependency chain**\n- Depends on authentication and authorization mechanisms for access to the NetSuite file cabinet."},
"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"The type of NetSuite record that the RESTlet will interact with. It determines the schema, validation rules, and operations applicable to the record, and therefore how data is processed and managed by the RESTlet.\n\n**Field behavior**\n- Targets a specific NetSuite record type (e.g., customer, salesOrder, invoice) and shapes the structure and format of payloads sent to and received from the RESTlet.\n- Controls validation rules, mandatory fields, available operations, the business logic and workflows triggered, and the permissions enforced during RESTlet execution.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the record type as recognized by NetSuite, and validate it against the list of supported record types to prevent runtime errors.\n- Confirm the RESTlet script has the permissions and roles needed for the record type; for custom record types, always use the script ID format (e.g., \"customrecord_myCustomRecord\"), since those IDs typically start with \"customrecord_\".\n- When handling multiple record types dynamically, branch on the record type to accommodate differences in data structure and processing, and test thoroughly after changing this value.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"customrecord_myCustomRecord\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- Invalid values cause API calls to fail, NetSuite permissions and role restrictions can limit access to certain record types, and changing the recordType may require updates to the RESTlet's code to handle different schemas and business logic.\n\n**Dependency chain**\n- Depends on the record types available in the NetSuite environment and their configurations, and influences the RESTlet's data validation, processing logic, and response formatting."},"searchId":{"type":"string","description":"The internal ID of a saved search in NetSuite that the RESTlet should execute. It lets the RESTlet run a predefined query and retrieve data based on the saved search's criteria and configuration.\n\n**Field behavior**\n- Must correspond to a valid, existing saved search internal ID in the NetSuite account; it determines the dataset and filters applied, and the structure and content of the response.\n- Typically required when invoking the RESTlet to perform search operations.
\n\n**Implementation guidance**\n- Verify the searchId matches an existing saved search in the target NetSuite environment, validate its format and existence before the RESTlet call, and implement error handling for invalid, missing, or inaccessible IDs.\n- Ensure the integration role or user has permission to access and execute the saved search, and consider caching or documenting frequently used searchIds for maintainability.\n\n**Examples**\n- \"1234\" - a numeric internal ID representing a specific saved search\n- \"1001\" - a saved search ID used to retrieve customer records\n- \"2002\" - a saved search ID configured to return transaction data\n\n**Important notes**\n- An incorrect or non-existent searchId results in errors or empty results, and the permissions and sharing settings on the saved search directly affect the data returned.\n- Changing the saved search (e.g., its filters or columns) changes the RESTlet output without changing the searchId, so the search must be configured with the desired filters, columns, and criteria to produce meaningful results.\n\n**Dependency chain**\n- Depends on the saved search existing in the NetSuite account, on appropriate user or integration role permissions, and on the saved search's configuration (filters, columns, criteria) to determine the returned dataset.
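\n\n**Example**\n\nA hedged sketch of a restlet-mode export configuration combining the properties described in this section (the IDs are placeholders):\n\n```json\n{\n  \"netsuite\": {\n    \"type\": \"restlet\",\n    \"restlet\": {\n      \"recordType\": \"customer\",\n      \"searchId\": \"1234\",\n      \"useSS2Restlets\": true\n    }\n  }\n}\n```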
"},"useSS2Restlets":{"type":"boolean","description":"Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. This controls which RESTlet version is invoked during API communication with NetSuite, affecting compatibility, performance, and available features.\n\n**Field behavior**\n- true: the system exclusively uses SuiteScript 2.0 RESTlets.\n- false or omitted (default): SuiteScript 1.0 RESTlets are used.\n- The selected version influences the structure, capabilities, and response formats of API calls.\n\n**Implementation guidance**\n- Enable to take advantage of SuiteScript 2.0's improved modularity, asynchronous capabilities, and modern JavaScript syntax, but first verify that SuiteScript 2.0 RESTlets are deployed, configured, and accessible in the target NetSuite environment.\n- Test existing integrations and workflows comprehensively when switching from SuiteScript 1.0 to 2.0, and coordinate with NetSuite administrators and developers to update or rewrite RESTlets if necessary.\n\n**Important notes**\n- Legacy RESTlets written in SuiteScript 1.0 may not be compatible with SuiteScript 2.0; migration or parallel support might be required, and switching versions can change API response formats and behaviors for downstream systems.\n- Keep version control and rollback plans in place when changing this setting.\n\n**Dependency chain**\n- Depends on SuiteScript 2.0 RESTlets being deployed and available in the NetSuite account, on the API client and integration logic supporting the selected version, and on related authentication and script permission settings.\n\n**Technical details**\n- SuiteScript 2.0 RESTlets use the AMD module format and support modular script architecture and ES6+ JavaScript features."},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"The version type of the NetSuite Restlet being used. It determines which version or variant of the Restlet API the integration interacts with, so that requests are routed, formatted, and parsed against the expected API contract.
\n\n**Field behavior**\n- Defines the version category or variant of the Restlet API, which influences request formatting, response parsing, the endpoints and methods available, and potentially authentication mechanisms and data serialization formats.\n\n**Implementation guidance**\n- Use only predefined, officially supported Restlet version types, and validate the value against the current list of supported versions before deployment.\n- Update the property when upgrading to a newer version or switching variants, coordinate changes with client applications and integration workflows, and monitor NetSuite release notes and documentation for changes or deprecations.\n\n**Examples**\n- \"1.0\" - the stable Restlet API version 1.0\n- \"2.0\" - the newer Restlet API version 2.0 with enhanced features\n- \"beta\" - a beta or experimental Restlet version for testing\n- \"custom\" - a custom or extended Restlet version for specific use cases\n\n**Important notes**\n- An incorrect or unsupported value can cause API calls to fail or behave unpredictably; the value must be consistent with the NetSuite environment configuration and deployment settings, and changing it may require updates to client-side code, authentication flows, and data handling.\n- Consult the latest official NetSuite documentation to verify supported Restlet versions and their characteristics.\n\n**Dependency chain**\n- Lives inside the restletVersion object, influences related properties such as authentication, endpoint URLs, and data formats, and may affect downstream components that rely on version-specific behavior.\n\n**Technical details**\n- Typically represented as a string value."},"enum":{"type":"array","items":{"type":"object"},"description":"The list of predefined values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API.
\n\n**Field behavior**\n- Restricts input to officially supported RESTlet API versions, enabling validation that rejects unsupported or malformed versions, plus auto-completion and dropdown selection in user interfaces and API clients.\n\n**Implementation guidance**\n- Populate with all currently supported versions per official documentation, update the values as versions are released or deprecated, and keep the string format consistent with the official versioning scheme (e.g., semantic versions like \"1.0\" and \"2.0\", or date-based versions like \"2023.1\").\n- Reject any input not included in the enum, and consider backward compatibility when adding or removing values so existing integrations do not break.\n\n**Examples**\n- [\"1.0\", \"2.0\", \"2.1\"]\n- [\"2023.1\", \"2023.2\", \"2024.1\"]\n- [\"v1\", \"v2\", \"v3\"] (if applicable to the versioning scheme)\n\n**Important notes**\n- Values must strictly align with NetSuite's official RESTlet API versioning scheme; anything outside the enum should trigger a validation error and prevent API calls or configuration saves, and changes should be communicated clearly to all API consumers.\n\n**Dependency chain**\n- Directly constrains the `restletVersion` property, so changes to the supported versions require corresponding enum updates; validation logic, UI components, and downstream consumers of `restletVersion` all depend on its accuracy.\n\n**Technical details**\n- Implemented as a string enumeration in the API schema and used by validation middleware or schema validators to enforce the allowed values."},"lowercase":{"type":"object","description":"Specifies whether the restlet version string should be converted to lowercase to ensure consistent formatting.\n\n**Field behavior**\n- true: the version string is converted to lowercase before any further processing or output.\n- false or omitted (default): the version string keeps its original casing.\n- Only the textual representation changes, not the underlying value or meaning.\n\n**Implementation guidance**\n- Use to enforce uniform casing when interacting with case-sensitive systems or APIs, apply the transformation early in the processing pipeline, and make sure downstream components respect the transformed casing when the property is
enabled.\n\n**Examples**\n- true: \"V1.0\" is output as \"v1.0\".\n- false: \"V1.0\" remains \"V1.0\".\n- omitted: the casing remains as originally provided.\n\n**Important notes**\n- Altering the casing may affect integrations with case-sensitive external systems, so confirm compatibility with all consumers of the version string first, and apply the setting consistently wherever the version string is handled, including comparisons, storage, logging, and output.\n\n**Technical details**\n- A boolean flag that applies a standard string-lowercasing operation before version comparisons, storage, or output; it does not modify the semantic meaning or version number."}},"description":"The version of the NetSuite Restlet script to invoke for the API call. Pinning a version ensures the request executes the intended script logic, supports managing multiple Restlet script versions within the same NetSuite environment, and smooths transitions between script updates.\n\n**Field behavior**\n- Selects which deployed Restlet script version handles the request, which shapes the behavior, output, and compatibility of the response and helps prevent conflicts or errors arising from script changes.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of the deployed Restlet script, and confirm that version is deployed and active in the NetSuite account before use.\n- Adopt a clear, consistent versioning scheme (semantic, date-based, or custom tags), validate the string format to prevent malformed or unsupported values, and coordinate version updates with deployment and testing processes.\n\n**Examples**\n- \"1.0\"\n- \"2.1\"\n- \"2023.1\"\n- \"v3\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version causes the API call to fail or produce unexpected results; version mismatches can lead to errors, data inconsistencies, or unsupported operations, so always verify the deployed Restlet script version first. This matters most in environments where multiple Restlet versions coexist.\n\n**Dependency chain**\n- Depends on the Restlet scripts deployed and versioned in the NetSuite environment, works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet, relates to the authentication and authorization context of the call (permissions may vary by script version), and may interact with environment-specific configurations or feature flags.\n\n**Technical details**\n- Typically represented as a string.
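\n\n**Example**\n\nA purely illustrative sketch; the accepted value format depends on how the Restlet scripts are versioned in the target account:\n\n```json\n{\n  \"restlet\": {\n    \"useSS2Restlets\": true,\n    \"restletVersion\": { \"type\": \"2.0\" }\n  }\n}\n```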
"},"criteria":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The category or classification of the criteria used within the NetSuite RESTlet API. It identifies the kind of records the criteria applies to, so filters and queries can target the right domain or entity type.\n\n**Field behavior**\n- Acts as a discriminator that guides the API in applying the correct schema and validation rules for the criteria, influences how the criteria is interpreted and processed, and may affect the fields and operators available.\n- Expects a predefined set of values corresponding to valid criteria types.\n\n**Implementation guidance**\n- Validate input against the allowed set of criteria types, use the exact case-sensitive names defined by NetSuite, and make sure the type aligns with the corresponding criteria structure and expected data fields.\n- Document the possible values and their meanings for API consumers, return meaningful errors for unsupported types, and keep the list of valid types current with NetSuite API versions and account configuration.\n\n**Examples**\n- \"customer\" - criteria on customer records\n- \"transaction\" - filtering on transaction data such as sales orders or invoices\n- \"item\" - criteria on inventory or product items\n- \"employee\" - employee-related data\n- \"vendor\" - vendor or supplier records\n- \"customrecord_xyz\" - a custom record type identified by its script ID\n\n**Important notes**\n- The value directly affects the behavior of the criteria and the resulting data set; incorrect or unsupported values can cause API errors, empty results, or unexpected behavior.\n- Valid types vary by account configuration, customizations, and API version, some types require additional permissions or roles, and the property is mandatory for criteria filtering to function correctly.\n\n**Dependency chain**\n- Part of the overall criteria object structure within NetSuite.restlet.criteria, and it influences the available fields, operators, and values within the criteria definition."},"join":{"type":"string","description":"The join used to relate records or tables in a NetSuite RESTlet query, enabling the
retrieval of data based on relationships between different record types.\n\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval, determining how records from different tables are combined on matching fields.\n- Supports nested joins for complex queries involving multiple related records.\n\n**Implementation guidance**\n- Use valid join names as defined in NetSuite's schema or documentation for the specific record types, make sure the join matches the intended relationship to avoid incorrect or empty results, and combine with filters on the joined records to refine results.\n- Validate join paths to prevent errors during query execution.\n\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using an \"item\" join to filter transactions based on item attributes.\n- A nested join from a sales order to its customer and then to the customer's address.\n\n**Important notes**\n- Incorrect join names or paths cause query failures or unexpected results; joins are case-sensitive, must match NetSuite's API specifications exactly, and are not supported by every record type (consult NetSuite documentation). They can also impact query performance, so use them judiciously.\n\n**Dependency chain**\n- Depends on the base record type of the query, works in conjunction with filter criteria to refine results, and may depend on authentication and permissions to access related records.\n\n**Technical details**\n- Typically a string (or object) indicating the join path, used within the criteria object of a RESTlet query payload; supports multiple levels of nesting and must conform to NetSuite's SuiteScript/RESTlet join syntax and conventions."},"operator":{"type":"string","description":"The comparison operator used to evaluate a criterion in a NetSuite RESTlet request. It determines how the field value is compared against the specified criteria value(s) when filtering or querying records.\n\n**Field behavior**\n- Defines the type of comparison between a field and a value (equality, inequality, greater than, contains, and so on) and drives the logic of criteria evaluation in RESTlet queries.\n\n**Implementation guidance**\n- Use only NetSuite-supported operators, match the operator to the data type of the field being compared (e.g., numeric operators for numeric fields), combine multiple criteria with logical operators where needed, and validate operator values to prevent RESTlet execution errors.\n\n**Examples**\n- \"is\" - checks if the field value equals the specified value\n- \"isnot\" - checks if the field value does not equal the specified value\n- \"greaterthan\" - checks if the field value is greater than the specified value\n- \"contains\" - checks if the field value contains the specified substring\n\n**Important notes**\n- The operator must be compatible with the field type and the value provided; incorrect usage can lead to unexpected query results or errors, and operator strings are case-sensitive.\n\n**Dependency 
chain**\n- Valid operators depend on the field specified in the criterion, the operator combines with the criteria value(s) to form a complete condition, and it may be combined with logical operators when multiple criteria are used.\n\n**Technical details**\n- Typically a string value in the RESTlet criteria JSON object; supported operators align with NetSuite's SuiteScript search operators and must be among those recognized by the RESTlet API."},"searchValue":{"type":"object","description":"The value used as the search criterion to filter results in the NetSuite RESTlet API. It is matched against the specified search field to retrieve the relevant records.\n\n**Field behavior**\n- The primary input for filtering search results; the expected data type (string, number, date, etc.) depends on the search field, and matching may be partial or full depending on the search configuration.\n- Used together with other search criteria to refine query results.\n\n**Implementation guidance**\n- Match the value's type to the search field, validate input to prevent injection attacks or malformed queries, encode special characters appropriately, and combine with logical operators or additional criteria for complex searches.\n\n**Examples**\n- \"Acme Corporation\" - searching customer names\n- 1001 - searching by internal record ID\n- \"2024-01-01\" - records created on or after a specific date\n- \"Pending\" - filtering records by status\n\n**Important notes**\n- The effectiveness of the search depends on the accuracy and format of the value; case sensitivity varies with the NetSuite configuration, large or complex values can impact performance, and null or empty values may result in no filtering or return all records.\n\n**Dependency chain**\n- Depends on the search field the criterion names and on the operator that defines how the value is compared, and it shapes the results the RESTlet endpoint returns.\n\n**Technical details**\n- Typically passed as a string in the request payload, serialized or formatted per the API specification, applied in NetSuite's server-side search logic, and subject to NetSuite's search limitations and indexing capabilities."},"searchValue2":{"type":"object","description":"An optional second value for a search criterion in the NetSuite RESTlet API.
It is typically used with operators that require two values, such as \"between\" or \"not between\", to define a range or a pair of comparison values.\n\n**Field behavior**\n- The second operand in a search condition, working alongside searchValue (the first value) to form a complete criterion; omit it when the operator requires only a single value.\n\n**Implementation guidance**\n- Provide searchValue2 only when the selected operator requires two values, validate that its data type matches the field being searched (date, number, string, etc.), and treat it as the upper bound or second boundary for range-based operators.\n\n**Examples**\n- Date range: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- Numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- \"not between\": searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 with an incompatible operator may produce an invalid search query; it is ignored when the operator takes only a single value, and its type and format must be consistent with searchValue and the field being queried, with proper validation and error handling in place.\n\n**Dependency chain**\n- Depends on the \"operator\" property in the same criterion and works in conjunction with \"searchValue\" as part of the criteria in the RESTlet search request.\n\n**Technical details**\n- The data type varies with the field being searched (string, number, date, etc.); it is serialized as a JSON property in the request payload, must match the expected format for the field and operator, and is used internally by NetSuite to construct the appropriate search filter.
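\n\n**Example**\n\nA sketch of a two-valued criterion using this object's own property names (the values are illustrative):\n\n```json\n{\n  \"criteria\": {\n    \"operator\": \"between\",\n    \"searchValue\": \"2023-01-01\",\n    \"searchValue2\": \"2023-12-31\"\n  }\n}\n```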
"},"formula":{"type":"string","description":"A custom formula string used to define criteria for filtering or querying data within the NetSuite RESTlet API. Formulas use NetSuite's formula syntax to express advanced, flexible conditions beyond standard field-value comparisons.\n\n**Field behavior**\n- Accepts a formula expression, evaluated by the NetSuite backend, that filters records according to custom logic and can incorporate NetSuite formula functions, operators, and field references.\n\n**Implementation guidance**\n- Keep the syntax compliant with NetSuite's formula language and supported functions, validate the string before submission to avoid runtime errors, use it when standard criteria fields are insufficient, and combine with other criteria fields as needed to build comprehensive queries.\n\n**Examples**\n- \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n- \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n- \"NVL({amount}, 0) > 1000\"\n\n**Important notes**\n- Incorrect or invalid formulas may cause the request to fail or return errors; the formula must be compatible with the query context and available fields, extensive use of complex formulas can affect performance, and evaluation is subject to NetSuite's formula engine capabilities and limitations.\n\n**Dependency chain**\n- Depends on the availability of the fields referenced within the formula, works in conjunction with the other criteria properties in the request, and requires familiarity with NetSuite's formula syntax and functions.\n\n**Technical details**\n- Data type: string; supports NetSuite formula syntax including SQL-like expressions and functions, is evaluated server-side during RESTlet processing, and must be URL-encoded if included in the query parameters of HTTP requests."},"_id":{"type":"object","description":"Unique identifier for the record within the NetSuite system.\n\n**Field behavior**\n- Serves as the primary key for a specific record; immutable once the record is created, and used to retrieve, update, or delete it.\n\n**Implementation guidance**\n- Must be a valid NetSuite internal ID, typically a string or numeric value; provide it when querying or manipulating a specific record, and avoid altering it to maintain data integrity.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n\n**Important notes**\n- Assigned by NetSuite and never generated manually; make sure it corresponds to an existing record to avoid errors, and note that when used in criteria it filters the dataset to the exact matching record.\n\n**Dependency chain**\n- Depends on the record existing in NetSuite and is often combined with other criteria fields for precise querying.\n\n**Technical details**\n- Typically a string or integer, used in RESTlet scripts as part of the criteria object to specify the target record, and may be included in URL parameters or request bodies depending on the API design."}},"description":"Defines the set of conditions or filters that determine which records a NetSuite RESTlet operation retrieves or affects.
This lets clients narrow the dataset precisely by applying one or more criteria based on record fields, comparison operators, and values, with support for complex logical combinations.\n\n**Field behavior**\n- Accepts a structured object or array representing one or more filtering conditions; each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n- Supports logical operators such as AND and OR, including nested groupings, and enables filtering on strings, numbers, dates, and booleans.\n- When omitted or empty, the RESTlet may return all records or apply its default filtering behavior.\n\n**Implementation guidance**\n- Validate the criteria structure rigorously against the expected schema, support nested criteria groups for hierarchical logic, and map fields and operators accurately to NetSuite record fields and search operators, considering data types and operator compatibility.\n- Handle empty or undefined criteria gracefully, sanitize all input values to prevent injection or malformed queries, provide clear error messages for invalid or unsupported criteria, and translate criteria into efficient NetSuite search queries.\n\n**Examples**\n- A single criterion filtering records where status equals \"Open\":\n  `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n- Multiple criteria combined with AND logic:\n  `[{\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n- Nested criteria combining OR and AND:\n  `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}, {\"field\": \"status\", \"operator\": \"isnot\", \"value\": \"Closed\"} ] } ] }`"},"columns":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The data type of the column in the NetSuite RESTlet response.
It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- \"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences how client applications parse, validate, and present the column values."},"join":{"type":"string","description":"join: Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
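As a quick, illustrative sketch before the details below (the field and join names are examples only, echoing the **Examples** list further down, not a guaranteed shape), a joined column might be declared as:\n\n```json\n{ \"name\": \"entityid\", \"join\": \"customer\", \"label\": \"Customer Name\" }\n```\n\n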
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to 
function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive, though uppercase is recommended for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"array","description":"enum: >\n  Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"boolean","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- 
May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"Aggregation settings for the column, controlling how the column's values are summarized (rather than returned row by row) in the RESTlet results.\n\n**Field behavior**\n- Groups the summary configuration for the column, most notably the summary type (e.g., SUM, COUNT, AVG, MIN, MAX) documented above.\n- When present, the column's output reflects the aggregated value computed across the matching records.\n- Optional; when omitted, the column returns unsummarized values.\n\n**Implementation guidance**\n- Pair the summary type with a column whose data type supports it (e.g., numeric columns for SUM or AVG).\n- Validate the configuration against the allowed summary operations to avoid errors or misleading results.\n\n**Dependency chain**\n- Depends on the column definition it is attached to and on its type sub-property.\n- Interacts with the overall report or query configuration to produce summarized results.\n\n**Technical details**\n- Accessible via the RESTlet API under the path `netsuite.restlet.columns.summary`."},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. 
This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"label":{"type":"string","description":"label: |\n  The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization resources if internationalization is supported.\n  **Technical 
details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"sort: >\n  Specifies the sorting order for the column values in the query results.\n  **Field behavior**\n  - Determines the order in which the data is returned based on the column values.\n  - Accepts values that indicate ascending or descending order.\n  - Influences how the dataset is organized before being returned by the API.\n  **Implementation guidance**\n  - Use standardized values such as \"asc\" for ascending and \"desc\" for descending.\n  - Ensure the sort parameter corresponds to a valid column in the dataset.\n  - Multiple sort parameters may be supported to define secondary sorting criteria.\n  - Validate the sort value to prevent errors or unexpected behavior.\n  **Examples**\n  - \"asc\" to sort the column values in ascending order.\n  - \"desc\" to sort the column values in descending order.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no sort parameter is provided, the default sorting behavior of the API applies.\n  - Sorting is case-insensitive.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Typically implemented as a string value in the API request.\n  - May be part of a query string or request body depending on the API design.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"columns: >\n  Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. 
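As an illustrative sketch only (field names are examples, and the exact accepted shape should be verified against the sub-property documentation above), a columns array might look like:\n\n```json\n[ { \"name\": \"tranid\", \"label\": \"Transaction #\" }, { \"name\": \"entityid\", \"join\": \"customer\" }, { \"name\": \"formulatext\", \"formula\": \"TO_CHAR({trandate}, 'YYYY-MM-DD')\", \"label\": \"Transaction Date\" } ]\n```\n\n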
By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - `[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically causes all available columns for the target record type to be returned, which may impact performance."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"number","description":"type: >\n  Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. 
It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
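For orientation, the sub-properties declared for markExportedBatchSize in this schema would combine into a configuration fragment along these lines (the values are illustrative only, not recommendations):\n\n```json\n{ \"markExportedBatchSize\": { \"min\": 10, \"max\": 1000, \"cLocked\": false } }\n```\n\n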
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"integer","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. 
It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer.\n- Must be a positive integer greater than zero.\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size."},"max":{"type":"integer","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for 
environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"markExportedBatchSize: The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing API latency can improve the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply 
to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters."},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during the execution of the NetSuite RESTlet hook. This parameter helps control the workload size for each batch operation, optimizing performance and resource utilization by balancing processing efficiency and system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle.\n- Directly influences the frequency and duration of batch processing operations.\n- Helps manage memory consumption and processing time by limiting the batch workload.\n- Affects overall throughput and latency of batch operations, impacting system responsiveness.\n- Controls how data is segmented and processed in discrete units during RESTlet execution.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts.\n- Larger batch sizes can improve throughput by reducing the number of batch cycles but may increase individual batch processing time and risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero to ensure proper operation.\n- Take into account NetSuite API governance limits, such as usage units and execution time, when determining batchSize.\n- Monitor system performance and adjust batchSize dynamically if possible to optimize processing efficiency.\n- Ensure batchSize aligns with other batch-related configurations to maintain consistency and predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high batchSize values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion.\n- Very low batchSize values can result in inefficient processing due to increased overhead and more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- batchSize should be consistent with other batch processing parameters to avoid conflicts or unexpected behavior.\n- Changes to batchSize may require adjustments in error handling and retry logic to accommodate different batch sizes.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by"},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during the preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n- Serves as a key reference for file-related operations in automated workflows and integrations.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations.\n- Use this ID to fetch file metadata, content, or perform updates within the preSend hook.\n- Implement error handling to manage cases where the fileInternalId does not correspond to a valid or accessible file.\n- Ensure that the executing user or integration has the necessary permissions to access the file referenced by this ID.\n- Avoid hardcoding this ID; retrieve dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across different accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID.\n- This ID is essential for linking files reliably in automated processes, integrations, and RESTlet hooks.\n- Deleting and recreating a file will result in a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet.\n- Requires appropriate permissions to access or manipulate the file.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation.\n- Stored internally within NetSuite’s database as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Immutable identifier that does not change unless the file is deleted and recreated.\n- Integral to reliable file references in automated processes, integrations, and RESTlet hooks."},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. 
It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow.\n  - Optimize the function for performance to minimize latency in request processing.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data.\n  - Ensure that the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - If asynchronous processing is used, verify that the integration framework supports it to avoid unexpected behavior.\n  - Testing the function thoroughly is critical to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named function within the integration’s runtime environment."},"configuration":{"type":"object","description":"configuration: >\n  An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests.\n  - Can include settings such as authentication parameters (e.g., tokens, API keys), request modification flags (e.g., header adjustments), logging options (e.g., enable/disable logging), timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during the execution of the preSend hook to adapt request handling based on current context or conditions.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object to avoid ambiguity and ensure maintainability.\n  - Validate all configuration values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all possible configuration options, their expected data types, default values, and their specific effects on the preSend hook’s operation.\n  - Ensure sensitive information within the configuration (e.g., authentication tokens) is handled securely, following best practices for encryption and access control.\n  - Consider versioning the configuration schema if multiple versions of the preSend hook or integration exist.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" }`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`"}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. 
This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client.\n- Receives the response data as input and can modify it.\n- Can be used to log, audit, or transform the response payload.\n- Should return the final response object to be sent.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n- Any changes made here directly impact the final output received by the client.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment and use case.\n- Ensure any modifications maintain the expected response format and data integrity.\n- Avoid long-running or blocking operations to prevent delaying the response delivery.\n- Handle errors gracefully within the hook to prevent disrupting the overall RESTlet response flow.\n- Validate the modified response to ensure it complies with API schema and client expectations.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for auditing or debugging purposes.\n- Transforming response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent.\n- Modifications here directly affect what the client ultimately receives.\n- Errors thrown in this hook may cause the RESTlet to fail or return an error response.\n- Use this hook to enforce response-level policies, compliance, or data governance rules.\n- Avoid introducing side effects that could alter the idempotency or consistency of the response.\n- Testing this hook thoroughly is critical to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared.\n- Precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or earlier processing steps that shape the response object."},"description":"hooks: >\n  A collection of hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
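For orientation, the batchSize and preSend sub-properties documented above suggest a configuration fragment along these lines (the function name and file ID are illustrative values taken from those sections, not prescriptive):\n\n```json\n{ \"hooks\": { \"batchSize\": 100, \"preSend\": { \"function\": \"sanitizePayload\", \"fileInternalId\": \"12345\" } } }\n```\n\n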
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The 
record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"restlet: The identifier or URL of the NetSuite RESTlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which RESTlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the specific target RESTlet script or endpoint for API calls within the NetSuite environment.\n- Routes requests to custom server-side scripts developed using NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations.\n- Supports various HTTP methods such as GET, POST, PUT, and DELETE depending on the RESTlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n- Acts as the primary entry point for invoking custom logic that extends or complements standard NetSuite functionality.\n\n**Implementation guidance**\n- Confirm that the RESTlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs.\n- Use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or full URL for external integrations or REST clients.\n- Verify that the RESTlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, etc.).\n- Secure the RESTlet endpoint by implementing authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic to manage scenarios where the RESTlet is unavailable or returns errors.\n- Test the RESTlet thoroughly in a non-production environment before deploying to production to ensure expected behavior and security compliance.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a RESTlet script ID for a custom sales order processing script)"},"distributed":{"type":"object","properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", 
\"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"disabled: Indicates whether the distributed feature in NetSuite is disabled or not. 
This boolean flag controls whether distributed functionality within the NetSuite integration is operational, letting administrators enable or disable it as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is disabled and no distributed operations or workflows execute.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n\n**Implementation guidance**\n- Before disabling, verify that no critical processes depend on distributed functionality, and restrict changes to users with appropriate permissions.\n- Emit clear notifications or system logs when the feature is disabled to aid troubleshooting and auditing.\n\n**Examples**\n- `disabled: true`  # All distributed operations are turned off.\n- `disabled: false` # The distributed feature is active (also the default when the property is omitted).\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, so assess the broader impact on the integration first.\n\n**Technical details**\n- Data type: Boolean; default value: `false` (distributed feature enabled).\n- Located under the `netsuite.distributed` namespace in the API schema."},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. 
When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", \"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"string","description":"qualifier: A string value that further scopes or classifies the data handled by the distributed export, enabling more granular identification, filtering, and conditional processing based on business rules.\n\n**Field behavior**\n- Acts as an additional modifier that refines the meaning, scope, or classification of the associated data.\n- Typically optional, but may be required in contexts where precise data segmentation is needed.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n\n**Implementation guidance**\n- Validate that the qualifier matches the set of accepted values defined in the business domain or integration specification to avoid errors or misclassification.\n- Use consistent naming conventions (e.g., lowercase, hyphen-separated), and document every custom and standard qualifier with its intended meaning and usage scenarios.\n\n**Examples**\n- \"region-us\" to specify data related to the United States region.\n- \"priority-high\" to indicate records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n\n**Important notes**\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering settings, so review the impact on downstream processing, reporting, and analytics when introducing new qualifiers."},"skipExportFieldId":{"type":"string","description":"skipExportFieldId identifies a field in the NetSuite distributed configuration whose data should be excluded from export processing, providing per-field control over what the export emits.\n\n**Field behavior:**  \n- When set, export routines omit the data linked to this field from the export payload.\n- Does not affect data visibility or storage within NetSuite; it only influences export output.\n- Useful where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance:**  \n- Ensure the field ID corresponds to a valid, existing field in the NetSuite schema to prevent export errors.\n- Consider maintaining a centralized list of skipExportFieldIds for easier management and auditing.\n\n**Examples:**  \n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)  \n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)  \n\n
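Taken together, the trigger-related fields above might combine as in this illustrative fragment (all values are placeholders drawn from the examples in this schema, not recommendations):\n\n```json\n{\n  \"netsuite\": {\n    \"distributed\": {\n      \"recordType\": \"salesorder\",\n      \"executionContext\": [\"userinterface\", \"webstore\"],\n      \"executionType\": [\"create\", \"edit\", \"xedit\"],\n      \"qualifier\": \"region-us\",\n      \"skipExportFieldId\": \"custbody_internal_notes\",\n      \"disabled\": false\n    }\n  }\n}\n```\n\n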
**Important notes:**  \n- This property only affects export operations; misconfiguration may lead to incomplete exports if critical fields are skipped unintentionally, so changes should be documented and reviewed.\n- It relies on export routines to check and respect this setting, and may interact with other export configuration options that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier NetSuite assigns to a file in the file cabinet, used to reference the exact file during retrieval, update, or deletion operations.\n\n**Field behavior**\n- Serves as a unique, immutable key identifying a file in the NetSuite file cabinet; it must correspond to an existing file's internal ID in the account.\n- Used in preSend hooks and other automation points to specify the exact file being processed or referenced.\n\n**Implementation guidance**\n- The value is the file's numeric internal ID; note that this schema transports it as a string (e.g., \"12345\").\n- Validate the internal ID before performing file operations, and avoid hardcoding it; retrieve it dynamically through queries or API calls where possible.\n- Handle the case where the ID does not correspond to any file with a meaningful error message or fallback logic.\n\n**Examples**\n- \"12345\"\n- \"987654\"\n\n**Important notes**\n- The internal ID is system-generated, unique within the account, and remains constant for the lifetime of the file even if the file is moved or renamed.\n- It is distinct from file names, external URLs, and folder identifiers; using an incorrect or non-existent ID will cause operations to fail.\n- Often used alongside related properties such as file name, folder ID, or file type for context or filtering."},"function":{"type":"string","description":"function: Specifies the name of the custom function to execute as a preSend hook within the NetSuite distributed export. 
This function is invoked immediately before data is sent, allowing custom processing, validation, or modification of the payload to enforce data integrity and business rules.\n\n  **Field behavior**\n  - Must reference a valid, accessible function within the current execution context; it receives the payload or context and can inspect or manipulate it before transmission.\n  - Its outcome can determine whether the send proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the name matches a defined function exactly, implement robust error handling, and keep execution fast; if asynchronous work is required, make sure it completes before the send continues.\n  - Document the function's purpose, input/output contract, and side effects, and follow consistent naming conventions.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"enrichOrderDetails\"\n\n  **Important notes**\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the send, depending on the implementation.\n  - Avoid long-running or blocking operations, irreversible side effects, and anything that could expose sensitive data or allow injection attacks; clear error reporting aids troubleshooting and monitoring."},"configuration":{"type":"object","description":"configuration: Settings that control the behavior and execution of the preSend hook, allowing customization of how data is prepared before being sent.\n  **Field behavior**\n  - A set of key-value options such as flags, thresholds, retry policies, timeout durations, logging switches, and payload constraints; nested objects are supported for structured settings.\n  - May be optional or mandatory depending on the specific implementation and requirements of the preSend hook.\n  **Implementation guidance**\n  - Validate all values rigorously against expected types, ranges, and formats; provide sensible defaults for optional parameters; and document each option's purpose, accepted values, and effect.\n  - Handle sensitive parameters (such as custom headers) securely, keep the configuration version-controlled, and test changes in development or staging before deploying to production.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  **Important notes**\n  - Incorrect or invalid values can cause the hook to fail or behave unpredictably, and some changes may require restarting or reinitializing the hook or related services to take effect."}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
It allows custom processing, modification, or validation of the request payload and headers prior to transmission.\n\n**Field behavior**\n- Executes just before the API request is dispatched, receiving the full request object (headers, body, query parameters, and related metadata).\n- May modify any part of the request, inject dynamic data such as authentication tokens or correlation IDs, and log outgoing request details for debugging or monitoring.\n- Supports validation logic; throwing an error aborts the request and propagates the error upstream.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object, awaiting any asynchronous work so the request is not dispatched prematurely.\n- Keep processing lightweight, centralize request customization here to reduce duplication, and avoid side effects on shared state or subsequent requests.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests across distributed systems.\n\n
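A minimal, illustrative preSend hook definition as it might appear within the `netsuite.distributed` configuration described above (the function name, file ID, and configuration values are hypothetical placeholders):\n\n```json\n{\n  \"netsuite\": {\n    \"distributed\": {\n      \"hooks\": {\n        \"preSend\": {\n          \"fileInternalId\": \"12345\",\n          \"function\": \"sanitizePayloadBeforeSend\",\n          \"configuration\": { \"enableLogging\": true }\n        }\n      }\n    }\n  }\n}\n```\n\n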
**Important notes**\n- This hook runs on every outgoing request, so minimize its performance impact and avoid network calls or heavy computation inside it.\n- It is strictly for pre-request processing, not response handling, and any modifications it makes directly affect the final request sent to NetSuite; ensure thread safety if it touches shared state."}},"description":"hooks: A collection of user-defined functions or callbacks executed at specific points in the lifecycle of the distributed process, enabling customization of the default behavior to fit unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Maps lifecycle events (pre-processing, post-processing, error handling, data transformation) to functions or callback references that the system invokes automatically at the corresponding stage.\n  - Hooks can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors; both synchronous and asynchronous execution models are supported.\n  - Execution order for hooks on the same event is deterministic and should be documented.\n\n  **Implementation guidance**\n  - Make hooks idempotent so repeated or retried executions have no unintended consequences, validate inputs and outputs rigorously, and handle errors gracefully so exceptions do not disrupt the main process flow.\n  - Document each hook's purpose, expected inputs, outputs, and side effects; test in isolated and integrated environments; and consider security implications such as data exposure or injection risks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs transaction metadata after a successful operation for auditing.\n  - A hook that enriches payload data during transformation to align with NetSuite's schema.\n  - A hook that triggers notifications upon errors, or retries failed operations with exponential backoff.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: A collection of related sublist objects associated with the main record, representing grouped sets of line-level data that belong to the primary record in the NetSuite distributed record context. 
These sublists organize complex, hierarchical record data into logically grouped sections, reflecting the one-to-many relationships inherent in NetSuite records (such as line items or related entities).\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record; entries are typically handled as arrays or lists supporting iteration and manipulation.\n  **Implementation guidance**\n  - Ensure each sublist entry adheres to the schema and data types of its specific sublist type, and validate consistency and referential integrity against the main record.\n  - Handle sublists dynamically (they may be empty or large), keep CRUD operations on entries accurate and up to date, and keep modifications transactional so changes remain atomic and consistent.\n  **Examples**\n  - A sales order with sublists for item lines (products, quantities, prices), shipping addresses, and payment schedules.\n  - An employee record with sublists for dependents, employment history, and certifications.\n  - A customer record with sublists for contacts, transactions, and communication logs.\n  **Important notes**\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing; keep sublists synchronized with the main record, and account for the performance cost of large sublists.\n  **Dependency chain**\n  - Depends on the main record schema."},"referencedFields":{"type":"array","items":{"type":"string"},"description":"referencedFields: A list of field identifiers referenced within the current context, used to capture dependencies or relationships between fields in a NetSuite distributed environment. Mapping how fields interact supports data integrity, validation, and synchronization across distributed components.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on, establishing explicit relationships and enabling dependency resolution at runtime or configuration time.\n  **Implementation guidance**\n  - Populate with valid, existing field identifiers as defined in the NetSuite schema or metadata, verify that referenced fields are accessible in the current context, keep the list updated as the schema changes, and avoid circular references.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  **Important notes**\n  - Entries must be unique and contain only field identifiers (names or keys), not field values; changes to referenced fields can cascade to dependent fields, business rules, or automation scripts and should be tested thoroughly.\n  **Technical details**\n  - Data type: array of strings; identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects representing records or entities linked to the primary record in the NetSuite distributed data model, providing contextual information and navigation to connected data. 
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries (e.g., transactions, custom records, subsidiary data), supporting hierarchical or multi-level associations.\n- Typically read-only in distributed data retrieval, though updates or synchronization may be possible depending on API capabilities and permissions.\n- May include metadata such as record counts, last-updated timestamps, or status indicators, with inclusion governed by user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Give each entry unique identifiers, descriptive metadata, and the navigation references needed for traversal, and validate all links to prevent broken or stale connections.\n- Use pagination, filtering, or sorting for large lists, consider caching or incremental updates for frequently accessed lists, and enforce access control so users only see lists they are authorized to access.\n\n**Examples**\n- A customer record's relatedLists might include \"Transactions\" (sales orders, invoices), \"Contacts\", and \"Cases\" (support tickets).\n- An invoice record's relatedLists could contain \"Payments\", \"Shipments\", and \"Adjustments\".\n- A vendor record's relatedLists may include \"Purchase Orders\", \"Bills\", and \"Vendor Contacts\"."},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. 
This flag matters most where stale data could cause errors or inconsistencies, such as immediately after configuration changes or data updates.\n\n**Field behavior:**\n- When true, the system bypasses caches and reloads data or configuration directly from the primary source, invalidating caches and refreshing dependent components.\n- When false or omitted, the system may serve cached or previously stored data to optimize performance and reduce load times.\n\n**Implementation guidance:**\n- Use judiciously to balance data freshness against performance, implement robust error handling during the reload cycle, and monitor resource utilization and response times while the flag is active.\n- Document the scenarios that warrant a forced reload to guide developers and operators.\n\n**Examples:**\n- forceReload: true (bypass caches and reload from the authoritative source immediately)\n- forceReload: false or omitted (serve from cache when available, improving response time)\n\n**Important notes:**\n- Excessive use can increase latency and resource consumption; the flag guarantees only freshness, not the correctness or integrity, of the source data.\n- Coordinate with cache invalidation policies and data synchronization mechanisms, and design downstream processes to tolerate the delays or transient states a forced reload can cause."},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across operational environments. 
It sets the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees.\n\n**Field behavior**\n- Governs how data is read from and written to storage systems, message queues, or communication channels, along with the environment-specific logging, monitoring, and error-handling strategies that apply.\n- Typically set during system initialization and kept stable at runtime to ensure predictable behavior.\n\n**Implementation guidance**\n- Validate the value against the supported environments (such as \"development\", \"staging\", \"production\", or custom configurations), and handle unsupported or invalid values gracefully with fallbacks.\n- Keep the setting consistent across all distributed nodes to prevent configuration drift, and document the operational implications, limitations, and recommended use cases of each environment option.\n\n**Examples**\n- \"development\" — local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined configurations for specialized I/O requirements or experimental setups.\n\n**Important notes**\n- Changing the environment typically requires restarting services or reinitializing connections, and misconfiguration can degrade performance, introduce security vulnerabilities, or cause data loss.\n- Sensitive data must be handled according to the security standards of the selected environment, and incompatible values across nodes can cause data inconsistencies or communication failures."},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment, serving as the base domain for API calls, data synchronization, and service endpoints.\n\n**Field behavior**\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards; it typically stays constant within a deployment environment but can differ across development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify the domain resolves for all distributed components, supports HTTPS with valid SSL/TLS certificates, and conforms to standard naming conventions (RFC 1035).\n- Coordinate changes with network, security, and operations teams, propagate updates to all dependent components when migrating or scaling, and monitor accessibility to catch connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n\n
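An illustrative fragment showing how these environment fields might appear together on an export configuration — the placement and values here are assumptions for illustration, not prescribed by this schema:\n\n```json\n{\n  \"netsuite\": {\n    \"distributed\": {\n      \"ioEnvironment\": \"production\",\n      \"ioDomain\": \"api.netsuite.com\"\n    }\n  }\n}\n```\n\n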
**Important notes**\n- A misconfigured ioDomain causes failed network requests, service interruptions, and data synchronization errors; changes may also require updates to firewall rules, proxy configurations, DNS, and certificate management, so allow for propagation delays and CDN/caching effects.\n- Keep usage consistent across distributed components to avoid routing conflicts and authentication issues, since workflows that rely on domain validation or origin verification depend on it."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the date and time when data was last successfully synchronized between the system and NetSuite. 
This timestamp indicates data freshness and drives decisions about whether updates or incremental syncs are needed.\n\n**Field behavior**\n- Updates only after a sync completes successfully; it reflects completion, not the start time or duration of the sync, and must never be updated after a failed or incomplete sync.\n- Stored and transmitted in ISO 8601 format with the UTC designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\") to avoid timezone inconsistencies.\n- May be null or omitted if synchronization has never been performed.\n\n**Implementation guidance**\n- Validate the format rigorously against ISO 8601, and implement fallback or error handling for missing or invalid timestamps.\n- Use it to drive incremental synchronization logic, data refresh triggers, audit trails, and monitoring or alerting on synchronization delays and failures.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n\n**Technical details**\n- Generated programmatically at the moment synchronization completes successfully."},"settings":{"type":"object","description":"settings: Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - A collection of key-value pairs — toggles, numeric thresholds, timeouts, batch sizes, logging options — that govern distributed processing workflows such as retry logic, concurrency limits, and error-handling strategies.\n  - Optional for basic usage but essential for advanced customization, performance tuning, and environment-specific adaptation.\n  **Implementation guidance**\n  - Validate input 
values rigorously against expected types, ranges, and formats, and provide sensible defaults so behavior stays predictable when explicit configuration is absent.\n  - Document each setting (purpose, valid values, default, impact), consider versioning or schema validation for changes to the settings structure over time, and never store credentials or secrets in settings unless they are encrypted or otherwise secured.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — retry attempts, detailed logging, and operation timeout.\n  - `{ \"batchSize\": 50, \"maxConcurrentJobs\": 10, \"logLevel\": \"DEBUG\" }` — batch size, concurrency limit, and logging verbosity.\n  **Important notes**\n  - Changes may require restarting or reinitializing the integration service, and incorrect configuration can cause failures, data inconsistencies, or degraded performance; manage settings carefully across environments to prevent configuration drift."},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to use the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture.\n\n**Field behavior**\n- When true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions; when false or omitted, it defaults to SuiteScript 1.0 or legacy frameworks.\n- Affects script loading, module resolution, and API compatibility, and therefore debugging, deployment, and maintenance processes.\n\n**Implementation guidance**\n- Verify that all custom scripts, modules, and third-party integrations are compatible with SuiteScript 2.0 before enabling this flag, and test thoroughly in a non-production environment first.\n- Use the flag to migrate gradually from SuiteScript 1.0, toggling between frameworks during the transition, and update deployment pipelines and CI/CD processes for SS2 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` or omitted — Falls back to the legacy SuiteScript 1.0 framework, maintaining backward compatibility.\n\n
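An illustrative fragment combining this flag with the settings object described above (values are hypothetical placeholders):\n\n```json\n{\n  \"netsuite\": {\n    \"distributed\": {\n      \"useSS2Framework\": true,\n      \"settings\": { \"retryCount\": 3, \"enableLogging\": true }\n    }\n  }\n}\n```\n\n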
**Important notes**\n- Enabling SS2 may require refactoring existing scripts to SuiteScript 2.0 syntax (including define/require module loading); some legacy APIs, global objects, and modules from SuiteScript 1.0 are deprecated or behave differently in SS2.\n- Review all scheduled scripts, workflows, and integrations for compatibility to prevent runtime errors, and plan for documentation and developer training where needed."},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"type: Classifies the framework version within the NetSuite distributed system — for example, whether it is a major release, minor update, patch, or experimental build — informing version control, deployment strategies, and compatibility assessments.\n\n**Field behavior**\n- Determines how the framework version is categorized and processed, influencing compatibility checks, update mechanisms, filtering and sorting of versions, and automated decisions such as rollback, promotion, or deprecation.\n\n**Implementation guidance**\n- Use standardized, predefined values (e.g., \"major\", \"minor\", \"patch\", \"experimental\") with consistent naming conventions and case sensitivity, validate against the allowed categories, and document any custom or extended types clearly.\n- Log or notify on changes to this property for audit purposes.\n\n**Examples**\n- \"major\" — a significant release introducing new features or breaking changes.\n- \"minor\" — smaller, non-breaking enhancements.\n- \"patch\" — bug fixes, security patches, or minor corrections.\n- \"experimental\" — versions under development, not intended for production.\n- \"deprecated\" — versions no longer supported or recommended.\n\n**Important notes**\n- Misclassification can cause improper handling of versions and system instability; accurate, consistent typing is critical for automated update systems and dependency management tools to 
determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alert"},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management"},"lowercase":{"type":"object","description":"lowercase: Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the content or structure of the version string, only its 
letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided by the programming environment.\n- Should be applied consistently across all points where the version string is output, stored, or transmitted."}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system. 
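\n\nAs an illustrative sketch only (the values below are hypothetical), the `enum` and `lowercase` sub-fields described above might be combined as:\n\n```json\n{\n  \"frameworkVersion\": {\n    \"enum\": [\"1.0.0\", \"1.1.0\", \"2.0.0\"],\n    \"lowercase\": true\n  }\n}\n```\n\n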
It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts 
or quantities equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or recalculation of the associated distribution detail records."},"getList":{"type":"object","properties":{"type":{"type":"array","items":{"type":"string"},"description":"type: Specifies the category or classification of the records to be retrieved from the NetSuite system.\nThis property determines the type of entities that the getList operation will query and return.\nIt defines the scope of the data retrieval by indicating which NetSuite record type the API should target.\n\n**Field behavior**\n- Defines the specific record type to fetch, such as customers, transactions, or items.\n- Influences the structure, fields, and format of the returned data based on the selected record type.\n- Must be set to a valid NetSuite record type identifier recognized by the API.\n- Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n\n**Implementation guidance**\n- Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n- Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n- Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n- Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n- When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"inventoryItem\"\n- \"employee\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n- The type property directly affects query performance, response size, and the complexity of the returned data.\n- Some record types may require 
additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n- Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n\n**Dependency chain**\n- The 'type' property is a required input for the NetSuite.getList operation.\n- The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n- Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n- Validation and error handling mechanisms rely on the correctness of the 'type' value.\n\n**Technical details**\n- Accepts string values corresponding to valid NetSuite record type identifiers."},"typeId":{"type":"string","description":"typeId: The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n\n**Field behavior**\n- Specifies the category or type of records to fetch from NetSuite.\n- Determines the schema and fields available in the returned records.\n- Must correspond to a valid NetSuite record type identifier.\n\n**Implementation guidance**\n- Use predefined NetSuite record type IDs as per NetSuite documentation.\n- Validate the typeId before making the API call to avoid errors.\n- Ensure the typeId aligns with the permissions and roles of the API user.\n\n**Examples**\n- \"customer\" for customer records.\n- \"salesOrder\" for sales order records.\n- \"employee\" for employee records.\n\n**Important notes**\n- Incorrect or unsupported typeId values will result in API errors.\n- The typeId is case-sensitive and must match NetSuite's expected values.\n- Changes in NetSuite's API or record types may affect valid typeId values.\n\n**Dependency chain**\n- Depends on the NetSuite record types supported by the account.\n- Influences the structure and content of the getList response.\n\n**Technical details**\n- Typically a string value representing the internal NetSuite record type.\n- Used as a parameter in the getList API endpoint to filter records.\n- Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system. 
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- 
Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"string","description":"_id: The unique identifier for a record within the NetSuite system. This identifier is used to retrieve, update, or reference specific records in API operations.\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"getList: Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
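\n\nA minimal sketch of this operation's configuration, using the sub-fields defined above with hypothetical identifiers:\n\n```json\n{\n  \"getList\": {\n    \"typeId\": \"customer\",\n    \"internalId\": \"12345\"\n  }\n}\n```\n\n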
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. 
This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May interact with related preferences such as `pageSize` and `returnSearchColumns`."},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks. 
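\n\nFor orientation, a sketch of how this preference might sit alongside its siblings (values are illustrative):\n\n```json\n{\n  \"searchPreferences\": {\n    \"bodyFieldsOnly\": true,\n    \"pageSize\": 50,\n    \"returnSearchColumns\": false\n  }\n}\n```\n\n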
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting pageSize avoids relying on those defaults."},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns: Specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce 
bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `returnSearchRows`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `returnSearchRows` (controls whether search rows are returned).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"searchPreferences: Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending 
order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with the search definition and the filtering, sorting, and pagination parameters supplied at query time."},"file":{"type":"object","description":"Configuration for retrieving files from the NetSuite file cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead. 
For example, {{{myFileField.fileName}}}.\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user must have access to the folder, and user roles can impact operations even with a valid folderInternalId\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder hierarchy changes do not affect the folderInternalId, ensuring persistent reference integrity\n- Internal IDs may differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored. 
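\n\nAs a sketch (folder IDs are hypothetical), this field is typically paired with `folderInternalId` inside the `netsuite.file` configuration:\n\n```json\n{\n  \"netsuite\": {\n    \"file\": {\n      \"folderInternalId\": \"1234\",\n      \"backupFolderInternalId\": \"5678\"\n    }\n  }\n}\n```\n\n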
This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Typically handled as a string containing a numeric value in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"112233\"\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting or renaming the folder associated with this ID will disrupt backup processes until updated.\n- Proper access rights and permissions are mandatory to write backup files to the specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: String containing the numeric NetSuite folder ID.\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript or REST API calls."}}}},"required":[]},"RDBMS":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Rdbms export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be 
included for other connection types. It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. 
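\n\nFor illustration (bucket names and prefixes are hypothetical), a complete S3 export block combining these fields might look like:\n\n```json\n{\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"acme-inbound\",\n    \"keyStartsWith\": \"orders/\",\n    \"keyEndsWith\": \".csv\",\n    \"backupBucket\": \"acme-inbound-archive\",\n    \"keyPrefix\": \"processed/\"\n  }\n}\n```\n\nIn this sketch, a source file `orders/2024/daily.csv` would be backed up as `processed/daily.csv`.\n\n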
Only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (e.g., files, HTTP responses). Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. 
Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. 
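\n\nTying these options together, a plausible custom-parsing entry (node paths and values are illustrative only) might be:\n\n```json\n[\n  {\n    \"version\": \"1\",\n    \"type\": \"xml\",\n    \"rules\": {\n      \"V0_json\": false,\n      \"listNodes\": [\"/orders/order/line\"],\n      \"trimSpaces\": true,\n      \"attributePrefix\": \"Att-\",\n      \"textNodeName\": \"value\"\n    }\n  }\n]\n```\n\n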
Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. 
REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (the one alternative is an extract-only pass-through; see `dataType`):\n\n**When to Use**\n- Applies when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (the one alternative is an extract-only pass-through; see `dataType`):\n\n**When to Use**\n- Applies when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. 
Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. 
**Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - 
\"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. 
It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. 
The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. 
Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/exports":{"get":{"summary":"List exports","description":"Returns a list of all exports configured in the account.\nIf no exports exist in the account, a 204 response with no body will be returned.\n","operationId":"listExports","tags":["Exports"],"parameters":[{"$ref":"#/components/parameters/Include"},{"$ref":"#/components/parameters/Exclude"}],"responses":{"200":{"description":"Successfully retrieved list of exports","headers":{"Link":{"description":"RFC-5988 pagination links. When more pages remain, includes a `<...>; rel=\"next\"` entry;\nabsent on the final page.\n","schema":{"type":"string"}}},"content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Response"}}}}},"204":{"description":"No exports exist in the account"},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
````
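
A hedged example may help before moving on. The sketch below shows one possible `200` response body for `GET /v1/exports` in an account with two exports; every identifier and timestamp is an illustrative placeholder, and real records carry the full field set described by the `Response` schema above. An account with no exports returns `204` with no body, and when more pages remain the `Link` response header carries a `rel="next"` entry.

````json
[
  {
    "_id": "000000000000000000000001",
    "name": "Orders - FTP export",
    "adaptorType": "FTPExport",
    "_connectionId": "0000000000000000000000aa",
    "createdAt": "2024-01-01T00:00:00.000Z",
    "lastModified": "2024-01-02T09:30:00.000Z"
  },
  {
    "_id": "000000000000000000000002",
    "name": "Users - HTTP delta export",
    "adaptorType": "HTTPExport",
    "type": "delta",
    "_connectionId": "0000000000000000000000bb",
    "createdAt": "2024-02-01T00:00:00.000Z",
    "lastModified": "2024-02-01T00:00:00.000Z"
  }
]
````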

## Create an export

> Creates a new export configuration that can be used to retrieve data from applications\
> or external sources.<br>
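
As a quick, non-authoritative sketch before the full schema: the request body below creates a minimal incremental (delta) HTTP export. It assumes an existing HTTP connection (the `_connectionId` value is a placeholder), and the `relativeURI` and `method` fields under `http` are shown on the assumption that they follow the `Http` schema referenced in the spec below; consult that schema for the authoritative shape.

````json
{
  "name": "Users - HTTP delta export",
  "_connectionId": "0000000000000000000000aa",
  "adaptorType": "HTTPExport",
  "type": "delta",
  "delta": { "lagOffset": 60000 },
  "http": {
    "relativeURI": "/api/v1/users?modified_since={{lastExportDateTime}}",
    "method": "GET"
  }
}
````

Per the `Delta` schema below, HTTP exports omit `delta.dateField` and rely on the `{{lastExportDateTime}}` variable instead; the first run behaves like a full export, and later runs use the platform-managed last successful run time as the cutoff.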

````json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor clarity when managing multiple integrations. The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. 
For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. 
The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid \"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate limiting and concurrency controls apply differently\n\nWhen false (default), the export 
operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File"}}},"http":{"$ref":"#/components/schemas/Http"},"file":{"$ref":"#/components/schemas/File"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"as2":{"$ref":"#/components/schemas/AS2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"ftp":{"$ref":"#/components/schemas/FTP"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports 
that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. `adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. 
Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional 
acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. 
It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. 
Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adapter types (except HTTP and REST where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction:**\n- When the adaptor type is HTTP, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" in the delta configuration; if you include it, the configuration will be invalid.\n\nExample HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) query for newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types. 
Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. 
Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. 
Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. **Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. **System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. 
May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**Hmac Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more complex to configure\n\n**Secret url**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose 
the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**Sha-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**Sha-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**Sha-384/sha-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": 
\"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. 
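To tie the HMAC fields together, here is a minimal sketch of the webhook security settings (the header name and key below are placeholders, not working values):\n\n```\n{\n  \"verify\": \"hmac\",\n  \"algorithm\": \"sha256\",            // must match the sender's algorithm\n  \"encoding\": \"hex\",                // must match the sender's encoding\n  \"header\": \"X-Webhook-Signature\",  // placeholder - use the sender's header name\n  \"key\": \"<shared-signing-secret>\"  // placeholder - the sender's signing secret\n}\n```\n\n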
It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
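As a worked sketch, the token-verification fields combine as follows, assuming the sender posts a body of the form { \"meta\": { \"token\": \"...\" } } (the token value is a placeholder):\n\n```\n{\n  \"verify\": \"token\",\n  \"token\": \"<random-high-entropy-token>\",  // placeholder shared secret\n  \"path\": \"meta.token\"                     // where the token appears in the request\n}\n```\n\n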
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
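A minimal sketch of the Basic Authentication fields (both credential values are placeholders):\n\n```\n{\n  \"verify\": \"basic\",\n  \"username\": \"webhook_user\",             // placeholder\n  \"password\": \"<strong-random-password>\"  // placeholder\n}\n```\n\n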
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 ok**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**Json (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**Xml**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n\nIMPORTANT: Be careful when setting 
security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\n
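Putting the challenge fields together, here is a minimal Slack-style sketch (the exact shape varies by provider, so verify it against your provider's documentation before use):\n\n```\n{\n  \"challengeSuccessStatusCode\": 200,\n  \"challengeSuccessMediaType\": \"json\",\n  \"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n}\n```\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook\nprovider's documentation for exact 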
requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 ok (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**Json (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**Xml**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. 
Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**Utf-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**Utf-16le (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**Gb18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**Iso-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift jis (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for ai agents**\n\n1. **Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. 
**Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**Csv Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**Json Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**Xml Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data. 
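For orientation, a sketch of a file configuration that pairs this field with type to parse CSV contents into records (each mode is described below):\n\n```\n{\n  \"type\": \"csv\",       // format-specific parsing settings go in the csv object\n  \"output\": \"records\"  // parse file contents into records\n}\n```\n\n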
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for ai agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. 
**Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. **Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for ai agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for ai agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. **Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**Pgp/gpg Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. **Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for ai agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects concurrent processing but within system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
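For example (an illustrative single-entry sort), `{\"field\": \"amount\", \"descending\": true}` would process the largest transaction amounts first. 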
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes| Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quotes handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard csv (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European csv (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. **Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. 
**Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. **Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. 
**Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When headers exist but should be ignored, use `hasHeaderRow: true` and `rowsToSkip: 1`\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for ai agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count from Zero**:\n    - Row 1 = 0, Row 2 = 1, Row 3 = 2, etc.\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. **For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. 
**Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior - test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**Json parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. **Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. 
**Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. 
**Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. **Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- \"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". 
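A minimal configuration might look like the following sketch (values illustrative):\n\n```json\n\"xlsx\": {\n  \"hasHeaderRow\": true\n}\n```\n\n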
This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. **Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields 
are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. **Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
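A minimal configuration might look like the following sketch (this XPath matches the simple element list example below):\n\n```json\n\"xml\": {\n  \"resourcePath\": \"/Records/Record\"\n}\n```\n\n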
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**Xml parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced xml**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or `/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**Xpath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. 
When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For repeating integration, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**Xpath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. **Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. 
**Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. **Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common xpath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
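A minimal configuration might look like the following sketch (the ObjectId shown is a placeholder, not a real definition ID):\n\n```json\n\"fileDefinition\": {\n  \"_fileDefinitionId\": \"507f1f77bcf86cd799439011\"\n}\n```\n\n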
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. **File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**Edi Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique 
identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct id**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. **File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are those contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese are the most commonly available fields across most file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
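For example, `\"backupPath\": \"/archive/exports/backups\"` (an illustrative Unix-style path) directs the system to copy each incoming file into that directory before parsing starts. 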
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. **Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of 
filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. 
This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- 
Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. 
This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Http":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (MOST COMMON CASE)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (RARE USE CASE)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files from an FTP server via HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
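For example (hypothetical values): a connection whose baseURI is `https://api.example.com/v2`, combined with a relativeURI of `/orders?status=open`, yields the request URL `https://api.example.com/v2/orders?status=open`.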
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. 
Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for AI agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. **page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. 
**skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. **nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. 
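\n\nAs a minimal sketch (hypothetical endpoint and parameter names), a page-based setup pairs the paging block with a relative URI that embeds the paging variable:\n\n```json\n// Hypothetical endpoint; {{export.http.paging.page}} is replaced with the current page number\n{\n  \"relativeURI\": \"/orders?page={{export.http.paging.page}}&per_page=100\",\n  \"paging\": {\n    \"method\": \"page\",\n    \"page\": 1\n  }\n}\n```\n\n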
Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for AI agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. 
When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page URL pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error patterns**\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use \"body\"**\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\"\n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. 
Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. 
This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. 
For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. 
The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. `[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. 
Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
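The value at this path is included in error logs and error records.\n\nExample (hypothetical response shape): If the API returns `{\"error\": {\"message\": \"Rate limit exceeded\"}}` on failure, set `errorPath` to `error.message` to surface that message in the error record.\n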
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but with inverted logic: checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Identifier for the HTTP connector endpoint used in the integration or API call. It uniquely specifies which endpoint configuration routes and processes the request, ensuring calls reach the correct HTTP service as defined in the integration setup.\n\n**Field behavior**\n- Uniquely identifies the HTTP connector endpoint within the system to ensure precise routing.\n- Used during the execution of integrations or API calls to select the appropriate HTTP endpoint configuration.\n- Typically required and must be specified when configuring or invoking HTTP-based connectors.\n- Once set, it generally remains immutable to maintain consistent routing behavior.\n- Acts as a key reference to the endpoint’s configuration, including URL, headers, authentication, and other settings.\n\n**Implementation guidance**\n- Must correspond to a valid and existing HTTP connector endpoint identifier within the system.\n- Validate the identifier against the current list of configured HTTP connector endpoints before use to prevent errors.\n- Avoid changing the identifier after initial assignment to prevent routing inconsistencies.\n- Ensure that the endpoint configuration referenced by this ID is active and properly configured.\n- Implement error handling for cases where the identifier does not match any existing endpoint.\n\n**Example**\n- \"5f6a7b8c9d0e1f2a3b4c5d6e\" (a 24-character MongoDB ObjectId, per the declared objectId format)\n\n**Important notes**\n- This field is critical for directing API calls to the correct HTTP endpoint; incorrect values can cause failed connections or routing errors.\n- Missing or invalid identifiers will typically result in integration failures or exceptions.\n- Proper permissions and access controls must be in place to use the specified HTTP connector endpoint.\n- Changes to the endpoint configuration referenced by this ID may affect all integrations relying on it.\n- The identifier should be managed carefully in 
deployment and version control processes.\n\n**Dependency chain**\n- Depends on the existence and configuration of HTTP connector endpoints within the system.\n- May be linked to authentication, authorization, and security settings associated with the endpoint.\n- Integration logic or API call workflows depend on this identifier to resolve the correct HTTP endpoint.\n- Changes in endpoint configurations or identifiers may require updates in dependent integrations.\n\n**Technical details**\n- Data type: String\n- Format: 24-character hexadecimal string (MongoDB ObjectId)\n- Stored as a reference or foreign key linking to the HTTP connector endpoint document."}}},"Salesforce":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce, either through queries or real-time events.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n    **Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**Soql Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. 
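As a rough sketch, the two shapes look like this (values are illustrative, not complete configurations):\n\n```\n\"salesforce\": { \"type\": \"soql\", \"soql\": { \"query\": \"SELECT Id, Name FROM Account\" } }\n\n\"salesforce\": { \"type\": \"distributed\", \"sObjectType\": \"Account\" }\n```\n\n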
This decision affects all other configuration aspects.\n","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": \"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**Soql Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static File id**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic File id (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous 
steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**Rest api**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk api 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your 
data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the 
export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- Note that SOQL does not support SELECT *; list every field explicitly\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Use the exact API name as defined in Salesforce\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When using relationship queries, child objects count against\ngovernor limits differently. 
For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":200000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event 
handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild records related to the primary object. 
Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- Empty array will retrieve only Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = 
true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**As2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**As2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. 
It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. 
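As a quick orientation, a minimal query configuration might look like this (table, key, and value names are illustrative):\n\n```\n{\n  \"region\": \"us-east-1\",\n  \"method\": \"query\",\n  \"tableName\": \"Customers\",\n  \"keyConditionExpression\": \"#pk = :pkValue\",\n  \"expressionAttributeNames\": \"{\\\"#pk\\\": \\\"customerId\\\"}\",\n  \"expressionAttributeValues\": \"{\\\":pkValue\\\": \\\"12345\\\"}\"\n}\n```\n\n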
It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid 
operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP":{"type":"object","description":"Configuration object for 
FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Jdbc export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the 
actual record ID from each exported row.\n"}}}}},"MongoDB":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently provide the following capabilities:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, 
","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum 
size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**NetSuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.blob`\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"` and configure `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. 
For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This allows you to use `suiteapp2.0` saved searches with dynamic inputs easily.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob exports (raw file transfer without parsing)
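\n\n**Example (illustrative)**\nA minimal saved search export sketch combining this type with the searches array described below:\n```json\n{\n  \"netsuite\": {\n    \"type\": \"search\",\n    \"searches\": [\n      {\n        \"savedSearchId\": \"10\",\n        \"recordType\": \"customer\"\n      }\n    ]\n  }\n}\n```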
"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"A collection of key-value pairs that provide additional contextual information about the NetSuite entity. This metadata can include custom attributes, tags, or any supplementary data that helps to describe, categorize, or operationally enhance the entity beyond its standard properties. It serves as an extensible mechanism to store user-defined or system-generated information that is not part of the core entity schema, enabling greater flexibility and customization in managing NetSuite data.\n  **Field behavior**\n  - Stores arbitrary additional information related to the NetSuite entity, enhancing its descriptive or operational context.\n  - Can include custom fields defined by users, system-generated tags, flags, timestamps, or nested structured data.\n  - Typically represented as a dictionary or map with string keys and values that may be strings, numbers, booleans, arrays, or nested objects.\n  - Metadata entries are optional and do not affect the core entity behavior unless explicitly integrated with business logic.\n  - Supports dynamic addition, update, and removal of metadata entries without impacting the primary entity schema.\n  **Implementation guidance**\n  - Ensure all metadata keys are unique within the collection to prevent accidental overwrites.\n  - Support flexible and heterogeneous value types, including primitive types and nested structures, to accommodate diverse metadata needs.\n  - Validate keys and values against naming conventions, length restrictions, and allowed character sets to maintain consistency and prevent errors.\n  - Implement efficient mechanisms for CRUD (Create, Read, Update, Delete) operations on metadata entries to facilitate easy management.\n  - Consider indexing frequently queried metadata keys for performance optimization.\n  - Provide clear documentation or schema definitions for any standardized or commonly used metadata keys.\n  **Examples**\n  - {\"department\": \"Sales\", \"region\": \"EMEA\", \"priority\": \"high\"}\n  - {\"customField1\": \"value1\", \"customFlag\": true}\n  - {\"tags\": [\"urgent\", \"review\"], \"lastUpdatedBy\": \"user123\"}\n  - {\"approvalStatus\": \"pending\", \"reviewCount\": 3, \"metadataVersion\": 2}\n  - {\"nestedInfo\": {\"createdBy\": \"admin\", \"createdAt\": \"2024-05-01T12:00:00Z\"}}\n  **Important notes**\n  - Metadata is optional and should not interfere with the core functionality or validation of the NetSuite entity.\n  - Modifications to metadata typically do not trigger business workflows or logic unless explicitly configured to do so."},"selectoption":{"type":"object","description":"Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each selectoption consists of a user-friendly label and an associated value that uniquely identifies the option internally. 
This structure enables consistent data entry, filtering, and categorization within NetSuite forms and records.\n  **Field behavior**\n  - Defines a single, discrete choice available to users in selection interfaces such as dropdowns, radio buttons, or multi-select lists.\n  - Can be part of a collection of options presented to the user for making a selection.\n  - Includes both a display label (visible to users) and a corresponding value (used internally or in API interactions).\n  - Supports filtering, categorization, and conditional logic based on the selected option.\n  - May be dynamically generated or statically defined depending on the field configuration.\n  **Implementation guidance**\n  - Assign a unique and stable value to each selectoption to prevent ambiguity and maintain data integrity.\n  - Use clear, concise, and user-friendly labels that accurately describe the option’s meaning.\n  - Validate option values against expected data types and formats to ensure compatibility with backend processing.\n  - Implement localization strategies for labels to support multiple languages without altering the underlying values.\n  - Consistently apply selectoption structures across all fields requiring predefined choices to standardize user experience.\n  - Consider accessibility best practices when designing labels and selection controls.\n  **Examples**\n  - { label: \"Active\", value: \"1\" }\n  - { label: \"Inactive\", value: \"2\" }\n  - { label: \"Pending Approval\", value: \"3\" }\n  - { label: \"High Priority\", value: \"high\" }\n  - { label: \"Low Priority\", value: \"low\" }\n  **Important notes**\n  - The label is intended for display purposes and may be localized; the value is the definitive identifier used in data processing and API calls.\n  - Values should remain consistent over time to avoid breaking integrations or corrupting data.\n  - When supporting multiple languages, labels should be translated appropriately while keeping values unchanged.\n  - Changes to selectoption values or labels should be managed carefully to prevent unintended side effects.\n  - Selectoption entries may be influenced by the context of the parent record, user roles, or permissions.\n  **Dependency chain**\n  - Utilized within field definitions that support selection inputs (e"},"customFieldMetadata":{"type":"object","description":"customFieldMetadata: Metadata information related to custom fields defined within the NetSuite environment, providing comprehensive details about each custom field's configuration, behavior, and constraints to facilitate accurate data handling and UI generation.\n**Field behavior**\n- Contains detailed metadata about custom fields, including their definitions, types, configurations, and constraints.\n- Provides contextual information necessary for understanding, validating, and manipulating custom fields programmatically.\n- May include attributes such as field ID, label, data type, default values, validation rules, display settings, sourcing information, and field dependencies.\n- Used to dynamically interpret or generate UI elements, data validation logic, or data structures based on custom field configurations.\n- Reflects the current state of custom fields as defined in the NetSuite account, enabling synchronization between the API consumer and the NetSuite environment.\n**Implementation guidance**\n- Ensure that the metadata accurately reflects the current state of custom fields in the NetSuite account by synchronizing regularly or on configuration changes.\n- 
Update the metadata whenever custom fields are added, modified, or removed to maintain consistency and prevent data integrity issues.\n- Use this metadata to validate input data against custom field constraints (e.g., data type, required status, allowed values) before processing or submission.\n- Consider caching metadata for performance optimization but implement mechanisms to refresh it periodically or on-demand to capture updates.\n- Handle cases where customFieldMetadata might be null, incomplete, or partially loaded gracefully, including fallback logic or error handling.\n- Respect user permissions and access controls when retrieving or exposing custom field metadata to ensure compliance with security policies.\n**Examples**\n- A custom field metadata object describing a custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- Metadata for a custom list/record field specifying the list of valid options, their internal IDs, and whether multiple selections are allowed.\n- Information about a custom date field including its date format, minimum and maximum allowed dates, and any validation rules applied.\n- Metadata describing a custom currency field with precision settings and default currency.\n**Important notes**\n- The structure and content of customFieldMetadata may vary depending on the NetSuite configuration, customizations, and API version.\n- Access to custom field metadata may require appropriate permissions within the NetSuite environment; unauthorized access may result in incomplete or no metadata.\n- Changes to custom fields in NetSuite (such as renaming, deleting, or changing data types) can impact the metadata and"},"skipGrouping":{"type":"boolean","description":"skipGrouping: Indicates whether to bypass the grouping of related records or transactions during processing, allowing each item to be handled individually rather than aggregated into groups.\n\n**Field behavior**\n- When set to true, the system processes each record or transaction independently, without combining them into groups based on shared attributes.\n- When set to false or omitted, related records or transactions are aggregated according to predefined grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Influences how data is structured, summarized, and reported in outputs or passed to downstream systems.\n- Affects the level of detail and granularity available in the processed data.\n\n**Implementation guidance**\n- Utilize this flag to control processing granularity, especially when detailed, record-level analysis or reporting is required.\n- Confirm that downstream systems, reports, or integrations can accommodate ungrouped data if skipGrouping is enabled.\n- Assess the potential impact on system performance and data volume, as disabling grouping may significantly increase the number of processed items.\n- Consider the use case carefully: grouping is generally preferred for summary reports, while skipping grouping suits detailed audits or troubleshooting.\n\n**Examples**\n- skipGrouping: true — Processes each transaction separately, providing detailed, unaggregated data.\n- skipGrouping: false — Groups transactions by customer or date, producing summarized results.\n- skipGrouping omitted — Defaults to grouping enabled, aggregating related records.\n\n**Important notes**\n- Enabling skipGrouping can lead to increased processing time, higher memory usage, and larger output datasets.\n- Some reports, 
dashboards, or integrations may require grouped data; verify compatibility before enabling this option.\n- The default behavior is typically grouping enabled (skipGrouping = false) unless explicitly overridden.\n- Changes to this setting may affect data consistency and comparability with previously generated reports.\n\n**Dependency chain**\n- Often depends on other properties that define grouping keys or criteria (e.g., groupBy fields).\n- May interact with filtering, sorting, or pagination settings within the processing pipeline.\n- Could influence or be influenced by aggregation functions or summary calculations applied downstream.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false (grouping enabled).\n- Implemented as a conditional flag checked during the data aggregation phase.\n- When true, bypasses aggregation logic and processes each record individually.\n- Typically integrated into the processing workflow to toggle between grouped and ungrouped data handling."},"statsOnly":{"type":"boolean","description":"statsOnly indicates whether the API response should include only aggregated statistical summary data without any detailed individual records. This property is used to optimize response size and improve performance when detailed data is unnecessary.\n\n**Field behavior**\n- When set to true, the API returns only summary statistics such as counts, averages, sums, or other aggregate metrics.\n- When set to false or omitted, the response includes both detailed data records and the associated statistical summaries.\n- Helps reduce network bandwidth and processing time by excluding verbose record-level data.\n- Primarily intended for use cases like dashboards, reports, or monitoring tools where only high-level metrics are required.\n\n**Implementation guidance**\n- Default the value to false to ensure full data retrieval unless explicitly requesting summary-only data.\n- Validate that the input is a boolean to prevent unexpected API behavior.\n- Use this flag selectively in scenarios where detailed records are not needed to avoid loss of critical information.\n- Ensure the API endpoint supports this flag before usage, as some endpoints may not implement statsOnly functionality.\n- Adjust client-side logic to handle different response structures depending on the flag’s value.\n\n**Examples**\n- statsOnly: true — returns only aggregated statistics such as total counts, averages, or sums without any detailed entries.\n- statsOnly: false — returns full detailed data records along with statistical summaries.\n- statsOnly omitted — defaults to false, returning detailed data and statistics.\n\n**Important notes**\n- Enabling statsOnly disables access to individual record details, which may limit in-depth data analysis.\n- The response schema changes significantly when statsOnly is true; clients must handle these differences gracefully.\n- Some API endpoints may not support this property; verify compatibility in the API documentation.\n- Pagination parameters may be ignored or behave differently when statsOnly is enabled, since detailed records are excluded.\n\n**Dependency chain**\n- May interact with filtering, sorting, or date range parameters that influence the statistical data returned.\n- Can affect pagination logic because detailed records are omitted when statsOnly is true.\n- Dependent on the API endpoint’s support for summary-only responses.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or 
part of the request payload depending on API design.\n- Alters the response payload structure by excluding detailed record arrays and including only aggregated metrics.\n- Helps optimize API performance and reduce response payload size in scenarios where detailed data is unnecessary."},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export type is \"blob\". For blob exports, you must specify the internalId of the file to export from the NetSuite file cabinet.\n\n**Field behavior**\n- Identifies a specific file in the NetSuite file cabinet by its internal ID\n- Required for blob exports (raw binary file transfers without parsing)\n- The file at this internal ID will be exported as-is without parsing\n\n**Implementation guidance**\n- For blob exports: Set the export's top-level type to \"blob\" (leave netsuite.type blank) and provide netsuite.internalId\n- Obtain the file's internalId from NetSuite's file cabinet or via API\n- Validate the internalId corresponds to an existing file before export\n\n**Examples**\n- \"12345\" - Internal ID of a specific file\n- \"67890\" - Another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId which specifies a folder for file exports with parsing\n- For blob exports: Use netsuite.internalId (file ID)\n- For file exports with parsing: Use netsuite.file.folderInternalId (folder ID)
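\n\n**Example (illustrative)**\nA minimal blob export sketch using this property, following the guidance above (top-level type plus netsuite.internalId):\n```json\n{\n  \"type\": \"blob\",\n  \"netsuite\": {\n    \"internalId\": \"12345\",\n    \"blob\": {\n      \"purgeFileAfterExport\": false\n    }\n  }\n}\n```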
"},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"purgeFileAfterExport: Whether to delete the file from the system after it has been successfully exported. This property controls the automatic removal of the source file post-export to help manage storage and maintain system cleanliness.\n\n**Field behavior**\n- Determines if the exported file should be removed from the source storage immediately after a successful export operation.\n- When set to `true`, the system deletes the file as soon as the export completes without errors.\n- When set to `false` or omitted, the file remains intact in its original location after export.\n- Facilitates automated cleanup of files to prevent unnecessary storage consumption.\n- Does not affect the export process itself; deletion occurs only after confirming export success.\n\n**Implementation guidance**\n- Confirm that the export operation has fully completed and succeeded before initiating file deletion to avoid data loss.\n- Verify that the executing user or system has sufficient permissions to delete files from the source location.\n- Assess downstream workflows or processes that might require access to the file after export before enabling purging.\n- Implement logging or notification mechanisms to record when files are purged for audit trails and troubleshooting.\n- Consider integrating with retention policies or backup systems to prevent accidental loss of important data.\n\n**Examples**\n- `true` — The file will be deleted immediately after a successful export.\n- `false` — The file will remain in the source location after export.\n- Omitted or `null` — Defaults to `false`, meaning the file is retained post-export.\n\n**Important notes**\n- File deletion is permanent and cannot be undone; ensure that the file is no longer needed before enabling this option.\n- Use caution in multi-user or multi-process environments where files may be shared or required beyond the export operation.\n- Immediate purging may interfere with backup, archival, or compliance requirements if files are deleted too soon.\n- Consider implementing safeguards or confirmation steps if enabling automatic purging in production environments.\n\n**Dependency chain**\n- Relies on the successful completion of the export operation to trigger file deletion.\n- Dependent on file system permissions and access controls to allow deletion.\n- May be affected by other system settings related to file retention, archival, or cleanup policies.\n- Could interact with error handling mechanisms to prevent deletion if export fails or is incomplete.\n\n**Technical details**\n- Typically represented as a boolean value (`true` or `false`).\n- Default behavior is to retain files unless explicitly set to `true`.\n- Deletion should be performed using secure"}},"description":"blob: Configuration for retrieving raw binary files from NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: Raw binary transfer WITHOUT parsing - leave netsuite.type blank\n- File exports: Parse file contents into records - set netsuite.type to \"file\"\n\nDo NOT use blob configuration when you want file content parsed into data records.\n\n**Field behavior**\n- Stores raw binary data including files, images, audio, video, or any non-textual content.\n- Supports download operations for binary content from the NetSuite file cabinet.\n- File content is transferred as-is without any parsing or transformation.\n- May be immutable or mutable depending on the specific NetSuite entity and operation.\n- Requires careful handling to maintain data integrity during transmission and storage.\n\n**Implementation guidance**\n- Always encode binary data (e.g., using base64) when transmitting over text-based protocols such as JSON or XML to ensure data integrity.\n- Validate the size of the blob against NetSuite API limits and storage constraints to prevent errors or truncation.\n- Implement secure handling practices, including encryption in transit and at rest, to protect sensitive binary data.\n- Use appropriate MIME/content-type headers when uploading or downloading blobs to correctly identify the data format.\n- Consider chunked uploads/downloads or streaming for large blobs to optimize performance and resource usage.\n- Ensure consistent encoding and decoding mechanisms between client and server to avoid data corruption.\n\n**Examples**\n- A base64-encoded PDF document attached to a NetSuite customer record.\n- An image file (PNG or JPEG) stored as a blob for product catalog entries.\n- A binary export of transaction data in a proprietary format used for integration with external systems.\n- Audio or video files associated with marketing campaigns or training materials.\n- Encrypted binary blobs containing sensitive configuration or credential data.\n\n**Important notes**\n- Blob size may be limited by NetSuite API constraints or underlying storage capabilities; exceeding these limits can cause failures.\n- Encoding and decoding must be consistent and correctly implemented to prevent data corruption or loss.\n- Large 
blobs should be handled using chunked or streamed transfers to avoid memory issues and improve reliability.\n- Security is paramount; blobs may contain sensitive information requiring encryption and strict access controls.\n- Access to blob data typically requires proper authentication and authorization aligned with NetSuite’s security model.\n\n**Dependency chain**\n- Dependent on authentication and authorization mechanisms"},"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"recordType specifies the type of NetSuite record that the RESTlet will interact with. This property determines the schema, validation rules, and operations applicable to the record within the NetSuite environment, directly influencing how data is processed and managed by the RESTlet.\n\n**Field behavior**\n- Defines the specific NetSuite record type (e.g., customer, salesOrder, invoice) targeted by the RESTlet.\n- Influences the structure and format of data payloads sent to and received from the RESTlet.\n- Controls validation rules, mandatory fields, and available operations based on the selected record type.\n- Affects permissions and access controls enforced during RESTlet execution, ensuring compliance with NetSuite security settings.\n- Determines the applicable business logic and workflows triggered by the RESTlet for the specified record type.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the NetSuite record type as recognized by the NetSuite system to ensure accurate targeting.\n- Validate the recordType value against the list of supported NetSuite record types to prevent runtime errors and ensure compatibility.\n- Confirm that the RESTlet script has the necessary permissions and roles assigned to access and manipulate the specified record type.\n- When handling multiple record types dynamically, implement conditional logic to accommodate differences in data structure and processing requirements.\n- For custom record types, always use the script ID format (e.g., \"customrecord_myCustomRecord\") to avoid ambiguity.\n- Test RESTlet behavior thoroughly after changing the recordType to verify correct handling of data and operations.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"customrecord_myCustomRecord\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- The recordType must correspond to a valid and supported NetSuite record type; invalid values will cause API calls to fail.\n- Custom record types require referencing by their script IDs, which typically start with \"customrecord_\".\n- Modifying the recordType may necessitate updates to the RESTlet’s codebase to handle different data schemas and business logic.\n- Permissions and role restrictions in NetSuite can limit access to certain record types, impacting RESTlet functionality.\n- Consistency in recordType usage is critical for maintaining data integrity and predictable RESTlet behavior.\n\n**Dependency chain**\n- Depends on the NetSuite environment’s available record types and their configurations.\n- Influences the RESTlet’s data validation, processing logic, and response formatting.\n-"},"searchId":{"type":"string","description":"searchId: The unique identifier for a saved search in NetSuite, used to specify which saved search the RESTlet should execute. This ID corresponds to the internal ID assigned to saved searches within the NetSuite system. 
It enables the RESTlet to run predefined queries and retrieve data based on the saved search’s criteria and configuration.\n\n**Field behavior**\n- Specifies the exact saved search to be executed by the RESTlet.\n- Must correspond to a valid and existing saved search internal ID within the NetSuite account.\n- Determines the dataset and filters applied when retrieving search results.\n- Typically required when invoking the RESTlet to perform search operations.\n- Influences the structure and content of the response based on the saved search definition.\n\n**Implementation guidance**\n- Verify that the searchId matches an existing saved search internal ID in the target NetSuite environment.\n- Validate the searchId format and existence before making the RESTlet call to prevent runtime errors.\n- Use the internal ID as a string or numeric value consistent with NetSuite’s conventions.\n- Implement error handling for scenarios where the searchId is invalid, missing, or inaccessible due to permission restrictions.\n- Ensure the integration role or user has appropriate permissions to access and execute the saved search.\n- Consider caching or documenting frequently used searchIds to improve maintainability.\n\n**Examples**\n- \"1234\" — a numeric internal ID representing a specific saved search.\n- \"5678\" — another valid saved search internal ID.\n- \"1001\" — an example of a saved search ID used to retrieve customer records.\n- \"2002\" — a saved search ID configured to return transaction data.\n\n**Important notes**\n- The searchId must be accessible by the user or integration role making the RESTlet call; otherwise, access will be denied.\n- Providing an incorrect or non-existent searchId will result in errors or empty search results.\n- Permissions and sharing settings on the saved search directly affect the data returned by the RESTlet.\n- The saved search must be properly configured with the desired filters, columns, and criteria to ensure meaningful results.\n- Changes to the saved search (e.g., modifying filters or columns) will impact the RESTlet output without changing the searchId.\n\n**Dependency chain**\n- Depends on the existence of a saved search configured in the NetSuite account.\n- Requires appropriate user or integration role permissions to access the saved search.\n- Relies on the saved search’s configuration (filters, columns, criteria) to"},"useSS2Restlets":{"type":"boolean","description":"Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. 
This setting controls the version of RESTlets invoked during API communication with NetSuite, impacting compatibility, performance, and available features.\n  **Field behavior**\n  - Determines the RESTlet version used for all API interactions within the NetSuite integration.\n  - When set to `true`, the system exclusively uses SuiteScript 2.0 RESTlets.\n  - When set to `false` or omitted, SuiteScript 1.0 RESTlets are used by default.\n  - Influences the structure, capabilities, and response formats of API calls.\n  **Implementation guidance**\n  - Enable this flag to take advantage of SuiteScript 2.0’s improved modularity, asynchronous capabilities, and modern JavaScript syntax.\n  - Verify that SuiteScript 2.0 RESTlets are properly deployed, configured, and accessible in the target NetSuite environment before enabling.\n  - Conduct comprehensive testing to ensure existing integrations and workflows remain functional when switching from SuiteScript 1.0 to 2.0 RESTlets.\n  - Coordinate with NetSuite administrators and developers to update or rewrite RESTlets if necessary.\n  **Examples**\n  - `true` — API calls will utilize SuiteScript 2.0 RESTlets, enabling modern scripting features.\n  - `false` — API calls will continue using legacy SuiteScript 1.0 RESTlets for backward compatibility.\n  **Important notes**\n  - SuiteScript 2.0 RESTlets support modular script architecture and ES6+ JavaScript features, improving maintainability and performance.\n  - Legacy RESTlets written in SuiteScript 1.0 may not be compatible with SuiteScript 2.0; migration or parallel support might be required.\n  - Switching RESTlet versions can change API response formats and behaviors, potentially impacting downstream systems.\n  - Ensure proper version control and rollback plans are in place when changing this setting.\n  **Dependency chain**\n  - Depends on the deployment and availability of SuiteScript 2.0 RESTlets within the NetSuite account.\n  - Requires that the API client and integration logic support the RESTlet version selected.\n  - May depend on other configuration settings related to authentication and script permissions.\n  **Technical details**\n  - SuiteScript 2.0 RESTlets use the AMD module format and support"},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the version type of the NetSuite Restlet being used. It determines the specific version or variant of the Restlet API that the integration will interact with, ensuring compatibility and correct functionality. 
This property is essential for correctly routing requests, handling responses, and maintaining alignment with the expected API contract for the chosen Restlet version.\n\n**Field behavior**\n- Defines the version category or variant of the NetSuite Restlet API.\n- Influences request formatting, response parsing, and available features.\n- Determines which API endpoints and methods are accessible.\n- May impact authentication mechanisms and data serialization formats.\n- Ensures that the integration communicates with the correct Restlet version to prevent incompatibility issues.\n\n**Implementation guidance**\n- Use only predefined and officially supported Restlet version types provided by NetSuite.\n- Validate the type value against the current list of supported Restlet versions before deployment.\n- Update the type property when upgrading to a newer Restlet version or switching to a different variant.\n- Include the type property within the restletVersion object to explicitly specify the API version.\n- Coordinate changes to this property with client applications and integration workflows to maintain compatibility.\n- Monitor NetSuite release notes and documentation for any changes or deprecations related to Restlet versions.\n\n**Examples**\n- \"1.0\" — specifying the stable Restlet API version 1.0.\n- \"2.0\" — specifying the newer Restlet API version 2.0 with enhanced features.\n- \"beta\" — indicating a beta or experimental Restlet version for testing purposes.\n- \"custom\" — representing a custom or extended Restlet version tailored for specific use cases.\n\n**Important notes**\n- Providing an incorrect or unsupported type value can cause API calls to fail or behave unpredictably.\n- The type must be consistent with the NetSuite environment configuration and deployment settings.\n- Changing the type may necessitate updates in client-side code, authentication flows, and data handling logic.\n- Always consult the latest official NetSuite documentation to verify supported Restlet versions and their characteristics.\n- The type property is critical for maintaining long-term integration stability and compatibility.\n\n**Dependency chain**\n- Depends on the restlet object to define the overall Restlet configuration.\n- Influences other properties related to authentication, endpoint URLs, and data formats within the restletVersion context.\n- May affect downstream processing components that rely on version-specific behaviors.\n\n**Technical details**\n- Typically represented as a string value"},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API. 
This enumeration restricts the input to specific allowed versions to ensure compatibility, prevent invalid version assignments, and facilitate validation and user interface enhancements.\n\n**Field behavior**\n- Defines the complete set of valid version identifiers for the `restletVersion` property.\n- Ensures that only officially supported RESTlet API versions can be selected or submitted.\n- Enables validation mechanisms to reject unsupported or malformed version inputs.\n- Supports auto-completion and dropdown selections in user interfaces and API clients.\n- Helps maintain consistency and compatibility across different API integrations and deployments.\n\n**Implementation guidance**\n- Populate the enum with all currently supported NetSuite RESTlet API versions as defined by official documentation.\n- Regularly update the enum values to reflect newly released versions or deprecated ones.\n- Use clear and consistent string formats that match the official versioning scheme (e.g., semantic versions like \"1.0\", \"2.0\" or date-based versions like \"2023.1\").\n- Implement strict validation logic to reject any input not included in the enum.\n- Consider backward compatibility when adding or removing enum values to avoid breaking existing integrations.\n\n**Examples**\n- [\"1.0\", \"2.0\", \"2.1\"]\n- [\"2023.1\", \"2023.2\", \"2024.1\"]\n- [\"v1\", \"v2\", \"v3\"] (if applicable based on versioning scheme)\n\n**Important notes**\n- Enum values must strictly align with the official NetSuite RESTlet API versioning scheme to ensure correctness.\n- Using a version value outside this enum should trigger a validation error and prevent API calls or configuration saves.\n- The enum acts as a safeguard against runtime errors caused by unsupported or invalid version usage.\n- Changes to this enum should be communicated clearly to all API consumers to manage version compatibility.\n\n**Dependency chain**\n- This enum is directly associated with and constrains the `restletVersion` property.\n- Updates to supported RESTlet API versions necessitate corresponding updates to this enum.\n- Validation logic and UI components rely on this enum to enforce version correctness.\n- Downstream processes that depend on the `restletVersion` value are indirectly dependent on this enum’s accuracy.\n\n**Technical details**\n- Implemented as a string enumeration type within the API schema.\n- Used by validation middleware or schema"},"lowercase":{"type":"object","description":"lowercase: Specifies whether the restlet version string should be converted to lowercase characters to ensure consistent formatting.\n\n**Field behavior**\n- Determines if the restlet version identifier is transformed entirely to lowercase characters.\n- When set to true, the version string is converted to lowercase before any further processing or output.\n- When set to false or omitted, the version string retains its original casing as provided.\n- Affects only the textual representation of the version string, not its underlying value or meaning.\n\n**Implementation guidance**\n- Use this property to enforce uniform casing for version strings, particularly when interacting with case-sensitive systems or APIs.\n- Validate that the value assigned is a boolean (true or false).\n- Define a default behavior (commonly false) when the property is not explicitly set.\n- Apply the lowercase transformation early in the processing pipeline to maintain consistency.\n- Ensure that downstream components respect the transformed casing if this property is 
enabled.\n\n**Examples**\n- `true` — The version string \"V1.0\" is converted and output as \"v1.0\".\n- `false` — The version string \"V1.0\" remains unchanged as \"V1.0\".\n- Property omitted — The version string casing remains as originally provided.\n\n**Important notes**\n- Altering the casing of the version string may impact integrations with external systems that are case-sensitive.\n- Confirm compatibility with all consumers of the version string before enabling this property.\n- This property does not modify the semantic meaning or version number, only its textual case.\n- Should be used consistently across all instances where the version string is handled to avoid discrepancies.\n\n**Dependency chain**\n- Requires a valid restlet version string to perform the lowercase transformation.\n- May be used in conjunction with other formatting or validation properties related to the restlet version.\n- The effect of this property should be considered when performing version comparisons or logging.\n\n**Technical details**\n- Implemented as a boolean flag controlling whether to apply a lowercase function to the version string.\n- The transformation typically involves invoking a standard string lowercase method/function.\n- Should be executed before any version string comparisons, storage, or output operations.\n- Does not affect the internal representation of the version beyond its string casing."}},"description":"restletVersion specifies the version of the NetSuite Restlet script to be used for the API call. This property determines which version of the Restlet script is invoked, ensuring compatibility and proper execution of the request. It allows precise control over which iteration of the Restlet logic is executed, facilitating version management and smooth transitions between script updates.\n\n**Field behavior**\n- Defines the specific version of the Restlet script to target for the API request.\n- Influences the behavior, output, and compatibility of the API response based on the selected script version.\n- Enables management of multiple Restlet script versions within the same NetSuite environment.\n- Ensures that the API call executes the intended logic corresponding to the specified version.\n- Helps prevent conflicts or errors arising from script changes or updates.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of the deployed Restlet script in NetSuite.\n- Confirm that the specified version is properly deployed and active in the NetSuite account before use.\n- Adopt a clear and consistent versioning scheme (e.g., semantic versioning, date-based, or custom tags) to avoid ambiguity.\n- Update this property whenever switching to a newer or different Restlet script version to reflect the intended logic.\n- Validate the version string format to prevent malformed or unsupported values.\n- Coordinate version updates with deployment and testing processes to ensure smooth transitions.\n\n**Examples**\n- \"1.0\"\n- \"2.1\"\n- \"2023.1\"\n- \"v3\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version will cause the API call to fail or produce unexpected results.\n- Proper versioning supports backward compatibility and controlled feature rollouts.\n- Always verify the Restlet script version in the NetSuite environment before making API calls.\n- Version mismatches can lead to errors, data inconsistencies, or unsupported operations.\n- This property is critical for environments where multiple Restlet 
versions coexist.\n\n**Dependency chain**\n- Depends on the Restlet scripts deployed and versioned within the NetSuite environment.\n- Related to the authentication and authorization context of the API call, as permissions may vary by script version.\n- Works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet.\n- May interact with environment-specific configurations or feature flags tied to particular versions.\n\n**Technical details**\n- Typically represented as a string"},"criteria":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the category or classification of the criteria used within the NetSuite RESTlet API. It defines the nature or kind of the criteria being applied to filter or query data, enabling precise targeting of records based on their domain or entity type.\n\n**Field behavior**\n- Determines the specific category or classification of the criteria.\n- Influences how the criteria is interpreted and processed by the API.\n- Helps in filtering or querying data based on the defined type.\n- Typically expects a predefined set of values corresponding to valid criteria types.\n- Acts as a key discriminator that guides the API in applying the correct schema and validation rules for the criteria.\n- May affect the available fields and operators applicable within the criteria.\n\n**Implementation guidance**\n- Validate the input against the allowed set of criteria types to ensure correctness and prevent errors.\n- Use consistent, descriptive, and case-sensitive naming conventions for the type values as defined by NetSuite.\n- Ensure that the specified type aligns with the corresponding criteria structure and expected data fields.\n- Document all possible values and their meanings clearly for API consumers to facilitate correct usage.\n- Implement error handling to provide meaningful feedback when unsupported or invalid types are supplied.\n- Keep the list of valid types updated in accordance with changes in NetSuite API versions and account configurations.\n\n**Examples**\n- \"customer\" — to specify criteria related to customer records.\n- \"transaction\" — to filter based on transaction data such as sales orders or invoices.\n- \"item\" — to apply criteria on inventory or product items.\n- \"employee\" — to target employee-related data.\n- \"vendor\" — to filter vendor or supplier records.\n- \"customrecord_xyz\" — to specify criteria for a custom record type identified by its script ID.\n\n**Important notes**\n- The type value directly affects the behavior of the criteria and the resulting data set.\n- Incorrect or unsupported type values may lead to API errors, empty results, or unexpected behavior.\n- The set of valid types may vary depending on the NetSuite account configuration, customizations, and API version.\n- Always refer to the latest official NetSuite API documentation and your account’s schema for supported types.\n- Some types may require additional permissions or roles to access the corresponding data.\n- The type property is mandatory for criteria filtering to function correctly.\n\n**Dependency chain**\n- Depends on the overall criteria object structure within NetSuite.restlet.criteria.\n- Influences the available fields, operators, and values within the criteria definition"},"join":{"type":"string","description":"join: Specifies the criteria used to join related records or tables in a NetSuite RESTlet query, enabling the 
retrieval of data based on relationships between different record types.\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval.\n- Determines how records from different tables are combined based on matching fields.\n- Supports nested joins to allow complex queries involving multiple related records.\n**Implementation guidance**\n- Use valid join names as defined in NetSuite’s schema or documentation for the specific record types.\n- Ensure the join criteria align with the intended relationship to avoid incorrect or empty query results.\n- Combine with appropriate filters on the joined records to refine query results.\n- Validate join paths to prevent errors during query execution.\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using \"item\" join to filter transactions based on item attributes.\n- Nested join example: joining from a sales order to its customer and then to the customer’s address.\n**Important notes**\n- Incorrect join names or paths can cause query failures or unexpected results.\n- Joins may impact query performance; use them judiciously.\n- Not all record types support all possible joins; consult NetSuite documentation.\n- Joins are case-sensitive and must match NetSuite’s API specifications exactly.\n**Dependency chain**\n- Depends on the base record type specified in the query.\n- Works in conjunction with filter criteria to refine results.\n- May depend on authentication and permissions to access related records.\n**Technical details**\n- Typically represented as a string or object indicating the join path.\n- Used within the criteria object of a RESTlet query payload.\n- Supports multiple levels of nesting for complex joins.\n- Must conform to NetSuite’s SuiteScript or RESTlet API join syntax and conventions."},"operator":{"type":"string","description":"Specifies the comparison operator used to evaluate the criteria in a NetSuite RESTlet request.\n  This operator determines how the field value is compared against the specified criteria value(s) to filter or query records.\n  **Field behavior**\n  - Defines the type of comparison between a field and a value (e.g., equality, inequality, greater than).\n  - Influences the logic of the criteria evaluation in RESTlet queries.\n  - Supports various operators such as equals, not equals, greater than, less than, contains, etc.\n  **Implementation guidance**\n  - Use valid NetSuite-supported operators to ensure correct query behavior.\n  - Match the operator type with the data type of the field being compared (e.g., use numeric operators for numeric fields).\n  - Combine multiple criteria with appropriate logical operators if needed.\n  - Validate operator values to prevent errors in RESTlet execution.\n  **Examples**\n  - \"operator\": \"is\" (checks if the field value is equal to the specified value)\n  - \"operator\": \"isnot\" (checks if the field value is not equal to the specified value)\n  - \"operator\": \"greaterthan\" (checks if the field value is greater than the specified value)\n  - \"operator\": \"contains\" (checks if the field value contains the specified substring)\n  **Important notes**\n  - The operator must be compatible with the field type and the value provided.\n  - Incorrect operator usage can lead to unexpected query results or errors.\n  - Operators are case-sensitive and should match NetSuite's expected operator strings.\n  **Dependency 
chain**\n  - Depends on the field specified in the criteria to determine valid operators.\n  - Works in conjunction with the criteria value(s) to form a complete condition.\n  - May be combined with logical operators when multiple criteria are used.\n  **Technical details**\n  - Typically represented as a string value in the RESTlet criteria JSON object.\n  - Supported operators align with NetSuite's SuiteScript search operators.\n  - Must conform to the list of operators recognized by the NetSuite RESTlet API.
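\n  **Example (illustrative)**\n  A partial criterion sketch for a date range, pairing this operator with the searchValue and searchValue2 properties described below:\n  ```json\n  { \"operator\": \"between\", \"searchValue\": \"2023-01-01\", \"searchValue2\": \"2023-12-31\" }\n  ```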
It is typically used in conjunction with search operators that require two values, such as \"between\" or \"not between,\" to define a range or a pair of comparison values.\n\n**Field behavior**\n- Represents the second operand or value in a search condition.\n- Used primarily with operators that require two values (e.g., \"between\", \"not between\").\n- Optional field; may be omitted if the operator only requires a single value.\n- Works alongside searchValue (the first value) to form a complete search criterion.\n\n**Implementation guidance**\n- Ensure that searchValue2 is provided only when the selected operator requires two values.\n- Validate the data type of searchValue2 to match the expected type for the field being searched (e.g., date, number, string).\n- When using range-based operators, searchValue2 should represent the upper bound or second boundary of the range.\n- If the operator does not require a second value, omit this property to avoid errors.\n\n**Examples**\n- For a date range search: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- For a numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- For a \"not between\" operator: searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 without a compatible operator may result in an invalid search query.\n- The data type and format of searchValue2 must be consistent with searchValue and the field being queried.\n- This property is ignored if the operator only requires a single value.\n- Proper validation and error handling should be implemented when processing this field.\n\n**Dependency chain**\n- Dependent on the \"operator\" property within the same search criterion.\n- Works in conjunction with \"searchValue\" to define the search condition.\n- Part of the \"criteria\" array or object in the NetSuite RESTlet search request.\n\n**Technical details**\n- Data type varies depending on the field being searched (string, number, date, etc.).\n- Typically serialized as a JSON property in the RESTlet request payload.\n- Must conform to the expected format for the field and operator to avoid API errors.\n- Used internally by NetSuite to construct the appropriate"},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to define criteria for filtering or querying data within the NetSuite RESTlet API.\n  This formula allows users to specify complex conditions using NetSuite's formula syntax, enabling advanced and flexible data retrieval.\n  **Field behavior**\n  - Accepts a formula expression as a string that defines custom filtering logic.\n  - Used to create dynamic and complex criteria beyond standard field-value comparisons.\n  - Evaluated by the NetSuite backend to filter records according to the specified logic.\n  - Can incorporate NetSuite formula functions, operators, and field references.\n  **Implementation guidance**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string before submission to avoid runtime errors.\n  - Use this field when standard criteria fields are insufficient for the required filtering.\n  - Combine with other criteria fields as needed to build comprehensive queries.\n  **Examples**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n  - \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n  - \"NVL({amount}, 0) > 1000\"\n  **Important notes**\n  - Incorrect or invalid formulas 
may cause the API request to fail or return errors.\n  - The formula must be compatible with the context of the query and the fields available.\n  - Performance may be impacted if complex formulas are used extensively.\n  - Formula evaluation is subject to NetSuite's formula engine capabilities and limitations.\n  **Dependency chain**\n  - Depends on the availability of fields referenced within the formula.\n  - Works in conjunction with other criteria properties in the request.\n  - Requires understanding of NetSuite's formula syntax and functions.\n  **Technical details**\n  - Data type: string.\n  - Supports NetSuite formula syntax including SQL-like expressions and functions.\n  - Evaluated server-side during the processing of the RESTlet request.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"_id":{"type":"string","description":"Unique identifier for the record within the NetSuite system.\n**Field behavior**\n- Serves as the primary key to uniquely identify a specific record.\n- Immutable once the record is created.\n- Used to retrieve, update, or delete the corresponding record.\n**Implementation guidance**\n- Must be a valid NetSuite internal ID format, typically a string or numeric value.\n- Should be provided when querying or manipulating a specific record.\n- Avoid altering this value to maintain data integrity.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n**Important notes**\n- This ID is assigned by NetSuite and should not be generated manually.\n- Ensure the ID corresponds to an existing record to avoid errors.\n- When used in criteria, it filters the dataset to the exact record matching this ID.\n**Dependency chain**\n- Dependent on the existence of the record in NetSuite.\n- Often used in conjunction with other criteria fields for precise querying.\n**Technical details**\n- Typically represented as a string or integer data type.\n- Used in RESTlet scripts as part of the criteria object to specify the target record.\n- May be included in URL parameters or request bodies depending on the API design."}},"description":"criteria: >\n  Defines the set of conditions or filters used to specify which records should be retrieved or affected by the NetSuite RESTlet operation. 
This property enables clients to precisely narrow down the dataset by applying one or more criteria based on record fields, comparison operators, and values, supporting complex logical combinations to tailor the query results.\n\n  **Field behavior**\n  - Accepts a structured object or an array representing one or multiple filtering conditions.\n  - Supports logical operators such as AND, OR, and nested groupings to combine multiple criteria flexibly.\n  - Each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n  - Enables filtering on various data types including strings, numbers, dates, and booleans.\n  - Used to limit the scope of data returned or manipulated by the RESTlet to only those records that meet the specified conditions.\n  - When omitted or empty, the RESTlet may return all records or apply default filtering behavior as defined by the implementation.\n\n  **Implementation guidance**\n  - Validate the criteria structure rigorously to ensure it conforms to the expected schema before processing.\n  - Support nested criteria groups to allow complex and hierarchical filtering logic.\n  - Map criteria fields and operators accurately to corresponding NetSuite record fields and search operators, considering data types and operator compatibility.\n  - Handle empty or undefined criteria gracefully by returning all records or applying sensible default filters.\n  - Sanitize all input values to prevent injection attacks, malformed queries, or unexpected behavior.\n  - Provide clear error messages when criteria are invalid or unsupported.\n  - Optimize query performance by translating criteria into efficient NetSuite search queries.\n\n  **Examples**\n  - A single criterion filtering records where status equals \"Open\":\n    `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n  - Multiple criteria combined with AND logic:\n    `[{\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n  - Nested criteria combining OR and AND:\n    `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2} ] } ] }`"},"columns":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the data type of the column in the NetSuite RESTlet response. 
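For illustration only (assuming the column-object notation documented under `columns` below, with a hypothetical field), a column entry might carry `{ \"name\": \"trandate\", \"label\": \"Date\", \"type\": \"date\" }`.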
It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- \"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences"},"join":{"type":"string","description":"join: Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
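As an illustrative sketch using the column-object notation documented under `columns` (the field names are examples, not a fixed contract), a joined field might be expressed as `{ \"name\": \"entityid\", \"join\": \"customer\" }` to pull the related customer's ID onto a transaction row.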
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to 
function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive but recommended to use uppercase for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"array","description":"enum: >\n  Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"boolean","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- 
May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"Configuration object describing how the column's values are summarized (aggregated) in the query results, mirroring NetSuite's saved-search summary columns.\n\n**Field behavior**\n- Groups the summary settings for a column: the aggregation applied (`type`), the set of allowed summary values (`enum`), and optional lowercase normalization of string output (`lowercase`).\n- When present, the column returns an aggregated value (e.g., SUM, COUNT, AVG, MIN, MAX) rather than row-level data.\n\n**Implementation guidance**\n- Choose a summary `type` that is compatible with the column's data type (e.g., SUM or AVG only for numeric columns).\n- Use `lowercase` only for string-type summary output where case normalization is desired.\n\n**Important notes**\n- Incorrect or unsupported summary configurations can cause errors or misleading results; see the sub-property descriptions for details.\n\n**Dependency chain**\n- Depends on the column definition it belongs to and on the column's data type.\n\n**Technical details**\n- Represented as an object with `type`, `enum`, and `lowercase` sub-properties.\n- Accessible via the RESTlet API under the path `netsuite.restlet.columns.summary`."},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. 
This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"label":{"type":"string","description":"label: |\n  The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization resources if internationalization is supported.\n  **Technical 
details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"sort: >\n  Specifies the sorting order for the column values in the query results.\n  **Field behavior**\n  - Determines the order in which the data is returned based on the column values.\n  - Accepts values that indicate ascending or descending order.\n  - Influences how the dataset is organized before being returned by the API.\n  **Implementation guidance**\n  - Use standardized values such as \"asc\" for ascending and \"desc\" for descending.\n  - Ensure the sort parameter corresponds to a valid column in the dataset.\n  - Multiple sort parameters may be supported to define secondary sorting criteria.\n  - Validate the sort value to prevent errors or unexpected behavior.\n  **Examples**\n  - \"asc\" to sort the column values in ascending order.\n  - \"desc\" to sort the column values in descending order.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no sort parameter is provided, the default sorting behavior of the API applies.\n  - Sorting is case-insensitive.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Typically implemented as a string value in the API request.\n  - May be part of a query string or request body depending on the API design.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"columns: >\n  Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. 
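As one illustrative, non-normative combination of the sub-properties documented above (the `formulanumeric` name and `Fee` label are hypothetical), a column list might mix plain and detailed entries, e.g. `[ \"tranid\", { \"name\": \"entityid\", \"join\": \"customer\" }, { \"name\": \"formulanumeric\", \"formula\": \"NVL({amount}, 0) * 0.1\", \"label\": \"Fee\" } ]`.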
By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - `[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically results in all available columns being retrieved for the target record type, which may increase payload size and impact performance."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. 
It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"integer","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. 
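Purely for illustration (the values are hypothetical), a constrained and locked batch-size configuration combining the sibling properties might look like `{ \"min\": 50, \"max\": 500, \"cLocked\": true }`.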
It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer\n- Must be a positive integer greater than zero\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size."},"max":{"type":"integer","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for 
environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"markExportedBatchSize: The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing API latency can improve the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply 
to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters"},"TODO":{"type":"object","description":"TODO: A placeholder property used to indicate tasks, features, or sections within the NetSuite RESTlet integration that require implementation, completion, or further development. This property functions as a clear marker for developers and project managers to identify areas that are pending work, ensuring that these tasks are tracked and addressed before finalizing the API. It is not intended to hold any functional data or be part of the production API contract until fully implemented.\n\n**Field behavior**\n- Serves as a temporary indicator for incomplete, pending, or planned tasks within the API schema.\n- Does not contain operational data or affect API functionality until properly defined and implemented.\n- Helps track development progress and highlight areas needing attention during the development lifecycle.\n- Should be removed or replaced with finalized implementations once the associated task is completed.\n- May be used to generate reports or dashboards reflecting outstanding development work.\n\n**Implementation guidance**\n- Utilize the TODO property to explicitly flag API sections requiring further coding, configuration, or review.\n- Accompany TODO entries with detailed comments or references to issue tracking systems (e.g., JIRA, GitHub Issues) for clarity and traceability.\n- Establish regular review cycles to update, resolve, or remove TODO properties to maintain an accurate representation of development status.\n- Avoid deploying TODO properties in production environments to prevent confusion, incomplete features, or potential runtime errors.\n- Integrate TODO tracking with project management workflows to ensure timely resolution.\n\n**Examples**\n- TODO: Implement OAuth 2.0 authentication mechanism for RESTlet endpoints.\n- TODO: Add comprehensive validation rules for input parameters to ensure data integrity.\n- TODO: Complete error handling and logging for data retrieval failures.\n- TODO: Optimize response payload size for improved performance.\n- TODO: Integrate unit tests covering all new RESTlet functionalities.\n\n**Important notes**\n- The presence of TODO properties signifies incomplete or provisional functionality and should not be interpreted as finalized API features.\n- Unresolved TODO items can lead to partial implementations, unexpected behavior, or runtime errors if not addressed before release.\n- Effective management and timely resolution of TODO properties are critical for maintaining code quality, project timelines, and overall system stability.\n- TODO properties should be clearly documented and communicated within the development team to avoid oversight.\n\n**Dependency chain**\n- TODO properties may depend on other modules, services, or API components that are under development or pending integration.\n- Often linked to external project management or issue tracking tools for assignment, prioritization, and progress monitoring.\n- 
The"},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during the execution of the NetSuite RESTlet hook. This parameter helps control the workload size for each batch operation, optimizing performance and resource utilization by balancing processing efficiency and system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle.\n- Directly influences the frequency and duration of batch processing operations.\n- Helps manage memory consumption and processing time by limiting the batch workload.\n- Affects overall throughput and latency of batch operations, impacting system responsiveness.\n- Controls how data is segmented and processed in discrete units during RESTlet execution.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts.\n- Larger batch sizes can improve throughput by reducing the number of batch cycles but may increase individual batch processing time and risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero to ensure proper operation.\n- Take into account NetSuite API governance limits, such as usage units and execution time, when determining batchSize.\n- Monitor system performance and adjust batchSize dynamically if possible to optimize processing efficiency.\n- Ensure batchSize aligns with other batch-related configurations to maintain consistency and predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high batchSize values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion.\n- Very low batchSize values can result in inefficient processing due to increased overhead and more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- batchSize should be consistent with other batch processing parameters to avoid conflicts or unexpected behavior.\n- Changes to batchSize may require adjustments in error handling and retry logic to accommodate different batch sizes.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by"},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during the preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n- Serves as a key reference for file-related operations in automated workflows and integrations.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations.\n- Use this ID to fetch file metadata, content, or perform updates within the preSend hook.\n- Implement error handling to manage cases where the fileInternalId does not correspond to a valid or accessible file.\n- Ensure that the executing user or integration has the necessary permissions to access the file referenced by this ID.\n- Avoid hardcoding this ID; retrieve dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across different accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID.\n- This ID is essential for linking files reliably in automated processes, integrations, and RESTlet hooks.\n- Deleting and recreating a file will result in a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet.\n- Requires appropriate permissions to access or manipulate the file.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation.\n- Stored internally within NetSuite’s database as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Immutable identifier that does not change unless the file is deleted and recreated.\n- Integral to"},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. 
It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow.\n  - Optimize the function for performance to minimize latency in request processing.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data.\n  - Ensure that the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - If asynchronous processing is used, verify that the integration framework supports it to avoid unexpected behavior.\n  - Testing the function thoroughly is critical to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named"},"configuration":{"type":"object","description":"configuration: >\n  An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests.\n  - Can include settings such as authentication parameters (e.g., tokens, API keys), request modification flags (e.g., header adjustments), logging options (e.g., enable/disable logging), timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during the execution of the preSend hook to adapt request handling based on current context or conditions.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object to avoid ambiguity and ensure maintainability.\n  - Validate all configuration values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all possible configuration options, their expected data types, default values, and their specific effects on the preSend hook’s operation.\n  - Ensure sensitive information within the configuration (e.g., authentication tokens) is handled securely, following best practices for encryption and access control.\n  - Consider versioning the configuration schema if multiple versions of the preSend hook or integration exist.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" }`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`"}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. 
This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client.\n- Receives the response data as input and can modify it.\n- Can be used to log, audit, or transform the response payload.\n- Should return the final response object to be sent.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n- Any changes made here directly impact the final output received by the client.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment and use case.\n- Ensure any modifications maintain the expected response format and data integrity.\n- Avoid long-running or blocking operations to prevent delaying the response delivery.\n- Handle errors gracefully within the hook to prevent disrupting the overall RESTlet response flow.\n- Validate the modified response to ensure it complies with API schema and client expectations.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for auditing or debugging purposes.\n- Transforming response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent.\n- Modifications here directly affect what the client ultimately receives.\n- Errors thrown in this hook may cause the RESTlet to fail or return an error response.\n- Use this hook to enforce response-level policies, compliance, or data governance rules.\n- Avoid introducing side effects that could alter the idempotency or consistency of the response.\n- Testing this hook thoroughly is critical to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared.\n- Precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or"}},"description":"hooks: >\n  An array of hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
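For example (an illustrative shape only, reusing the `sanitizePayload` function name from the examples above), a hooks object might look like `{ \"batchSize\": 100, \"preSend\": { \"fileInternalId\": \"12345\", \"function\": \"sanitizePayload\" } }`.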
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"object","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The 
record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"restlet: The identifier or URL of the NetSuite Restlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which Restlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the specific target Restlet script or endpoint for API calls within the NetSuite environment.\n- Routes requests to custom server-side scripts developed using NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations.\n- Supports various HTTP methods such as GET, POST, PUT, and DELETE depending on the Restlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n- Acts as the primary entry point for invoking custom logic that extends or complements standard NetSuite functionality.\n\n**Implementation guidance**\n- Confirm that the Restlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs.\n- Use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or full URL for external integrations or REST clients.\n- Verify that the Restlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, etc.).\n- Secure the Restlet endpoint by implementing authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic to manage scenarios where the Restlet is unavailable or returns errors.\n- Test the Restlet thoroughly in a non-production environment before deploying to production to ensure expected behavior and security compliance.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a Rest"},"distributed":{"type":"object","properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", 
\"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"disabled: Indicates whether the distributed feature in NetSuite is disabled or not. 
This boolean flag controls the availability and operational status of the distributed functionalities within the NetSuite integration, allowing administrators or systems to enable or disable these features as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is fully disabled, preventing any distributed operations or workflows from executing.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n- Acts as a toggle switch to control the accessibility of distributed capabilities within the NetSuite environment.\n- Changes to this flag directly influence the behavior of distributed-related processes and integrations.\n\n**Implementation guidance**\n- Use a boolean value: `true` to disable the distributed feature, `false` to enable it.\n- Before disabling, verify that no critical processes depend on distributed functionality to avoid disruptions.\n- Implement validation checks to confirm the current state before initiating distributed operations.\n- Provide clear user notifications or system logs when the feature is disabled to aid in troubleshooting and auditing.\n- Consider the impact on dependent modules and ensure coordinated updates if disabling this feature.\n\n**Examples**\n- `disabled: true`  # The distributed feature is turned off, disabling all related operations.\n- `disabled: false` # The distributed feature is active and available for use.\n- Omitted `disabled` property defaults to `false`, enabling the feature by default.\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, potentially causing failures or delays.\n- Some systems may require a restart or reinitialization after changing this setting for the change to take full effect.\n- Modifying this property should be restricted to users with appropriate permissions to prevent unauthorized disruptions.\n- Always assess the broader impact on the NetSuite integration before toggling this flag.\n\n**Dependency chain**\n- Directly affects modules and properties that rely on distributed functionality within the NetSuite integration.\n- Should be checked and respected by any API calls, workflows, or processes that involve distributed features.\n- May influence error handling and fallback mechanisms in distributed-related operations.\n\n**Technical details**\n- Data type: Boolean\n- Default value: `false` (distributed feature enabled)\n- Located under the `netsuite.distributed` namespace in the API schema\n- Changing this property triggers state changes in distributed feature availability within the system"},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. 
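As an illustrative trigger configuration (assuming, as the surrounding descriptions suggest, that the context filter above and this operation filter are applied together):\n\n```json\n{ \"executionContext\": [\"userinterface\"], \"executionType\": [\"create\", \"edit\"] }\n```\n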
When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", \"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"object","description":"qualifier: A string value used to specify a particular qualifier or modifier that further defines, categorizes, or scopes the associated data within the NetSuite distributed context. This property enables more granular identification, filtering, and processing of data by applying specific criteria or attributes relevant to business logic, integration workflows, or operational requirements. It serves as an optional but powerful tool to distinguish data subsets, enhance data semantics, and support conditional handling in distributed NetSuite environments.\n\n**Field behavior**\n- Acts as an additional identifier or modifier to refine the meaning, scope, or classification of the associated data.\n- Enables filtering, categorization, or qualification of data entries in distributed NetSuite operations based on specific business rules.\n- Typically optional but may be mandatory in certain contexts or API endpoints where precise data segmentation is required.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n- Supports multiple use cases including regional segmentation, priority tagging, type classification, and channel identification.\n\n**Implementation guidance**\n- Ensure the qualifier value strictly aligns with the accepted set of qualifiers defined in the business domain, integration specifications, or system configuration.\n- Implement validation mechanisms to verify that the qualifier string matches allowed or expected values to avoid errors, misclassification, or unintended behavior.\n- Adopt consistent naming conventions and formatting standards (e.g., lowercase, hyphen-separated) for qualifiers to maintain clarity, readability, and interoperability across systems.\n- Maintain comprehensive documentation of all custom and standard qualifiers used, including their intended meaning and usage scenarios, to facilitate maintenance, troubleshooting, and future integrations.\n- Consider the impact of qualifiers on downstream processing, reporting, and analytics to ensure they are leveraged effectively and do not introduce ambiguity.\n\n**Examples**\n- \"region-us\" to specify data related to the United 
States region.\n- \"priority-high\" to indicate transactions or records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n- \"segment-enterprise\" to classify data pertaining to enterprise-level customers.\n- \"status-active\" to filter or identify active records within a dataset.\n\n**Important notes**\n- The qualifier should be meaningful, contextually relevant, and aligned with the business logic to ensure accurate data interpretation.\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering"},"skipExportFieldId":{"type":"string","description":"skipExportFieldId is an identifier for a specific field within the NetSuite distributed configuration that determines whether certain data should be excluded from export processes. It serves as a control mechanism to selectively omit data associated with particular fields during export operations, enabling tailored and efficient data handling.\n\n**Field behavior:**  \n- Acts as a flag or marker to skip exporting data associated with the specified field ID.  \n- When set, the export routines will omit the data linked to this field from being included in the export payload.  \n- Helps control and customize the export behavior on a per-field basis within distributed NetSuite configurations.  \n- Does not affect data visibility or storage within NetSuite; it only influences export output.  \n- Supports multiple uses in scenarios where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance:**  \n- Ensure the field ID provided corresponds to a valid and existing field within the NetSuite schema to prevent export errors.  \n- Use this property to optimize export operations by excluding unnecessary or sensitive data fields, improving performance and compliance.  \n- Validate the field ID format and existence before applying it to avoid runtime issues during export.  \n- Integrate with export logic to check this property before including fields in the export output, ensuring consistent behavior.  \n- Consider maintaining a centralized list or configuration of skipExportFieldIds for easier management and auditing.  \n\n**Examples:**  \n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)  \n- skipExportFieldId: \"item_custom_field_123\" (excludes a specific item custom field from export)  \n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)  \n\n**Important notes:**  \n- This property only affects export operations and does not alter data storage or visibility within NetSuite.  \n- Misconfiguration may lead to incomplete data exports if critical fields are skipped unintentionally, potentially impacting downstream processes.  \n- Should be used judiciously to maintain data integrity and compliance with business rules and regulatory requirements.  \n- Changes to this property should be documented and reviewed to avoid unintended data omissions.  \n\n**Dependency chain**\n- Depends on the existence and validity of the specified field ID within the NetSuite schema.  \n- Relies on export routines to check and respect this property during data export processes.  
\n- May interact with other export configuration settings that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is essential for accurately referencing and manipulating a specific file during various operations such as retrieval, update, or deletion within NetSuite’s environment. It acts as a primary key that ensures precise targeting of files in automated workflows, scripts, and API calls.\n\n**Field behavior**\n- Serves as a unique and immutable key to identify a file in the NetSuite file cabinet.\n- Utilized in pre-send hooks and other automation points to specify the exact file being processed or referenced.\n- Must correspond to an existing file’s internal ID within the NetSuite account to ensure valid operations.\n- Enables consistent and reliable file operations by linking actions directly to the file’s system-assigned identifier.\n\n**Implementation guidance**\n- Always ensure the value is a string containing the numeric internal ID of an existing file in NetSuite; the schema carries the identifier as a string.\n- Validate the internal ID before performing any file operations to avoid runtime errors or failed transactions.\n- Use this ID when invoking NetSuite APIs, SuiteScript, or other integration points to fetch, update, or delete files.\n- Avoid hardcoding the internal ID; instead, dynamically retrieve it through queries or API calls to maintain adaptability and reduce maintenance overhead.\n- Handle exceptions gracefully when the ID does not correspond to any file, providing meaningful error messages or fallback logic.\n\n**Examples**\n- \"12345\"\n- \"987654\"\n- \"1001\"\n\n**Important notes**\n- The internal ID is system-generated by NetSuite and guaranteed to be unique within the account.\n- This ID is distinct from file names, external URLs, or folder identifiers and should not be confused with them.\n- Using an incorrect or non-existent internal ID will cause operations to fail, potentially interrupting workflows.\n- The internal ID remains constant for the lifetime of the file and does not change even if the file is moved or renamed.\n\n**Dependency chain**\n- Depends on the file existing in the NetSuite file cabinet prior to referencing.\n- Often used alongside related properties such as file name, folder ID, file type, or metadata to provide context or additional filtering.\n- May be required input for downstream processes that manipulate or validate file contents.\n\n**Technical details**\n- Carried as a string containing the numeric value assigned by NetSuite upon file creation.\n- Immutable once assigned; cannot be altered or reassigned to a different file.\n- Used internally by NetSuite APIs, SuiteScript, and integration"},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be executed as a pre-send hook within the NetSuite distributed system. 
This function is invoked immediately before sending data or requests, allowing for custom processing, validation, or modification of the payload to ensure data integrity and compliance with business rules.\n\n  **Field behavior**\n  - Defines the exact function to be called prior to sending data or requests.\n  - Enables interception, inspection, and manipulation of data before transmission.\n  - Supports integration of custom business logic, validation, enrichment, or logging steps.\n  - Must reference a valid, accessible function within the current execution context or environment.\n  - The function’s execution outcome can influence whether the sending process proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the function name corresponds exactly to a defined function in the codebase, script environment, or registered hooks.\n  - The function should accept the expected input parameters (such as the payload or context) and return appropriate results or modifications.\n  - Implement robust error handling within the function to prevent unhandled exceptions that could disrupt the sending workflow.\n  - Document the function’s purpose, input/output contract, and side effects clearly for maintainability and future reference.\n  - Validate that the function executes efficiently and completes promptly to avoid introducing latency or blocking the sending process.\n  - If asynchronous operations are necessary, ensure they are properly awaited or handled to guarantee completion before sending.\n  - Follow consistent naming conventions aligned with the overall codebase or organizational standards.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"customAuthorizationCheck\"\n  - \"enrichOrderDetails\"\n  - \"checkInventoryAvailability\"\n\n  **Important notes**\n  - The function must be synchronous or correctly handle asynchronous behavior to ensure it completes before the send operation proceeds.\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the sending process depending on the implementation.\n  - Avoid performing long-running or blocking operations within the function to maintain system responsiveness.\n  - The function should not perform irreversible side effects unless explicitly intended, as it runs prior to data transmission.\n  - Ensure the function does not introduce security vulnerabilities, such as exposing sensitive data or allowing injection attacks.\n  - Consistent and clear error reporting within the function aids in troubleshooting and operational monitoring.\n\n  **Dependency**"},"configuration":{"type":"object","description":"configuration: >\n  Configuration settings for the preSend hook in the NetSuite distributed system.\n  This property defines the parameters and options that control the behavior and execution of the preSend hook, allowing customization of how data is processed before being sent.\n  It enables fine-tuning of operational aspects such as retries, timeouts, validation rules, logging, and payload constraints to ensure reliable and efficient data transmission.\n  **Field behavior**\n  - Specifies the customizable settings that dictate how the preSend hook operates.\n  - Controls data manipulation, validation, and preparation steps prior to sending.\n  - Can include flags, thresholds, retry policies, timeout durations, logging options, and other operational parameters.\n  - May be optional or mandatory depending on the 
specific implementation and requirements of the preSend hook.\n  - Supports nested configuration objects to allow detailed and structured settings.\n  **Implementation guidance**\n  - Define clear, well-documented configuration options that directly impact the preSend process.\n  - Validate all configuration values rigorously to ensure they conform to expected data types, ranges, and formats.\n  - Provide sensible default values for optional parameters to enhance usability and reduce configuration errors.\n  - Ensure backward compatibility when extending or modifying configuration options.\n  - Include comprehensive documentation for each configuration parameter, including its purpose, accepted values, and effect on hook behavior.\n  - Consider security implications when allowing configuration of headers or other sensitive parameters.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"customHeaders\": { \"X-Custom-Header\": \"value\" } }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  - `{ \"payloadCompression\": \"gzip\", \"timeout\": 10000 }`\n  **Important notes**\n  - Incorrect or invalid configuration values can cause the preSend hook to fail or behave unpredictably.\n  - Thorough testing of configuration changes in development or staging environments is critical before deploying to production.\n  - Some configuration changes may require restarting or reinitializing the hook or related services to take effect.\n  - Sensitive configuration parameters should be handled securely to prevent exposure of confidential information.\n  - Configuration should be version-controlled and documented to facilitate maintenance"}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
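Drawing the sub-properties documented in this section together, a hypothetical preSend definition might look like:\n\n```json\n{ \"fileInternalId\": \"987654\", \"function\": \"sanitizePayloadBeforeSend\", \"configuration\": { \"retryCount\": 3, \"timeout\": 5000 } }\n```\n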
It allows for custom processing, modification, or validation of the request payload and headers, enabling dynamic adjustments or logging prior to transmission.\n\n**Field behavior**\n- Executed synchronously or asynchronously just before the API request is dispatched to the NetSuite endpoint.\n- Receives the full request object, including headers, body, query parameters, and other relevant metadata.\n- Permits modification of any part of the request, such as altering headers, adjusting the payload, or changing query parameters.\n- Supports validation logic to ensure the request meets required criteria; throwing an error will abort the request.\n- Enables injection of dynamic data like authentication tokens, custom headers, or correlation IDs.\n- Can be used for logging or auditing outgoing request details for debugging or monitoring purposes.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object.\n- Ensure that any asynchronous operations within the hook are properly awaited to maintain request integrity.\n- Keep processing lightweight to avoid introducing latency or blocking the request pipeline.\n- Handle exceptions carefully; unhandled errors will prevent the request from being sent.\n- Centralize request customization logic here to improve maintainability and reduce duplication.\n- Avoid side effects that could impact other parts of the system or subsequent requests.\n- Validate inputs thoroughly to prevent malformed requests from being sent to the API.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Modifying request parameters based on user roles or feature flags at runtime.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests across distributed systems.\n\n**Important notes**\n- This hook executes on every outgoing request, so its performance impact should be minimized.\n- Any modifications made within preSend directly affect the final request sent to NetSuite.\n- Throwing an error inside this hook will abort the request and propagate the error upstream.\n- This hook is strictly for pre-request processing and should not be used for handling responses.\n- Avoid making network calls or heavy computations inside this hook to prevent delays.\n- Ensure thread safety if the hook accesses shared resources or global state.\n\n**Dependency chain**\n- Invoked after request construction but before the request is dispatched.\n- Precedes any network transmission or retry"}},"description":"hooks: >\n  A collection of user-defined functions or callbacks that are executed at specific points during the lifecycle of the distributed process within the NetSuite integration. 
These hooks enable customization and extension of the default behavior by injecting custom logic before, during, or after key operations, allowing for flexible adaptation to unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Contains one or more functions or callback references mapped to specific lifecycle events.\n  - Each hook corresponds to a distinct event or stage in the distributed process, such as pre-processing, post-processing, error handling, or data transformation.\n  - Hooks are invoked automatically by the system at predefined points in the workflow.\n  - Can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors.\n  - Supports both synchronous and asynchronous execution models depending on the hook’s purpose and implementation.\n  - Execution order of hooks for the same event is deterministic and should be documented.\n  - Hooks should be designed to avoid side effects that could impact other parts of the process.\n\n  **Implementation guidance**\n  - Define hooks as named functions or references to executable code blocks compatible with the integration environment.\n  - Ensure hooks are idempotent to prevent unintended consequences from repeated or retried executions.\n  - Validate all inputs and outputs rigorously within hooks to maintain data integrity and system stability.\n  - Use hooks to integrate with external systems, perform custom validations, enrich data, or implement business-specific logic.\n  - Document each hook’s purpose, expected inputs, outputs, and any side effects clearly for maintainability.\n  - Implement robust error handling within hooks to gracefully manage exceptions without disrupting the main process flow.\n  - Test hooks thoroughly in isolated and integrated environments to ensure reliability and performance.\n  - Consider security implications, such as data exposure or injection risks, when implementing hooks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs detailed transaction metadata after a successful operation for auditing purposes.\n  - A hook that modifies or enriches payload data during transformation stages to align with NetSuite’s schema.\n  - A hook that triggers email or system notifications upon error occurrences to alert support teams.\n  - A hook that retries failed operations with exponential backoff to improve resilience.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: >\n  A collection of related sublist objects associated with the main record, representing grouped sets of data entries that provide additional details or linked information within the NetSuite distributed record context. 
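The exact shape of each sublist entry is record-type specific and not fixed by this schema; purely as a conceptual illustration (all keys and values hypothetical), a sublists value might resemble:\n\n```json\n{ \"item\": [ { \"item\": \"SKU-100\", \"quantity\": 2 } ] }\n```\n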
These sublists enable the organization of complex, hierarchical data by encapsulating related records or line items that belong to the primary record, facilitating detailed data management and interaction within the system.\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record.\n  - Organizes and structures complex record information into manageable, logically grouped sections.\n  - Supports nested and hierarchical data representation, allowing for detailed and granular record composition.\n  - Typically handled as arrays or lists of sublist objects, supporting iteration and manipulation.\n  - Reflects one-to-many relationships inherent in NetSuite records, such as line items or related entities.\n  **Implementation guidance**\n  - Ensure each sublist object strictly adheres to the defined schema and data types for its specific sublist type.\n  - Validate the consistency and referential integrity of sublist data in relation to the main record to prevent data anomalies.\n  - Design for dynamic handling of sublists, accommodating varying sizes including empty or large collections.\n  - Implement robust CRUD (Create, Read, Update, Delete) operations for sublist entries to maintain accurate and up-to-date data.\n  - Consider transactional integrity when modifying sublists to ensure changes are atomic and consistent.\n  **Examples**\n  - A sales order record containing sublists for item lines detailing products, quantities, and prices; shipping addresses specifying delivery locations; and payment schedules outlining installment plans.\n  - An employee record with sublists for dependents listing family members; employment history capturing previous roles and durations; and certifications documenting professional qualifications.\n  - A customer record including sublists for contacts with communication details; transactions recording purchase history; and communication logs tracking interactions and notes.\n  **Important notes**\n  - Sublists are critical for accurately modeling one-to-many relationships within NetSuite records, enabling detailed data capture and reporting.\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing.\n  - Maintaining synchronization between sublists and the main record is essential to preserve data integrity and prevent inconsistencies.\n  - Performance considerations should be taken into account when handling large sublists to optimize system responsiveness.\n  **Dependency chain**\n  - Depends on the main record schema"},"referencedFields":{"type":"object","description":"referencedFields: >\n  A list of field identifiers that are referenced within the current context, typically used to denote dependencies or relationships between fields in a NetSuite distributed environment. 
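For example, wrapped in its parent object (identifiers taken from the examples later in this description), the field might appear as:\n\n```json\n{ \"referencedFields\": [\"invoiceNumber\", \"paymentStatus\"] }\n```\n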
This property helps in mapping out how different fields interact or rely on each other, facilitating data integrity, validation, and synchronization across distributed components or services.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on or interacts with.\n  - Used to establish explicit relationships or dependencies between multiple fields.\n  - Enables tracking of data flow and ensures consistency across distributed systems.\n  - Supports dynamic resolution of dependencies during runtime or configuration.\n  **Implementation guidance**\n  - Populate with valid and existing field identifiers as defined in the NetSuite schema or metadata.\n  - Verify that all referenced fields are accessible and correctly scoped within the current context.\n  - Use this property to manage dependencies critical for data validation, synchronization, or processing logic.\n  - Keep the list updated to reflect any schema changes to avoid broken references or inconsistencies.\n  - Avoid circular references by carefully managing dependencies between fields.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  - [\"productCode\", \"inventoryLevel\", \"reorderThreshold\"]\n  **Important notes**\n  - Referenced fields must be unique within the list to prevent redundancy and confusion.\n  - Modifications to referenced fields can impact dependent processes; changes should be tested thoroughly.\n  - This property contains only the identifiers (names or keys) of fields, not their actual data or values.\n  - Proper documentation of referenced fields improves maintainability and clarity of dependencies.\n  **Dependency chain**\n  - Often linked with fields that require validation or data aggregation from other fields.\n  - May influence or be influenced by business rules, workflows, or automation scripts that depend on multiple fields.\n  - Changes in referenced fields can cascade to affect dependent fields or processes.\n  **Technical details**\n  - Data type: Array of strings.\n  - Each string represents a unique field identifier within the NetSuite distributed environment.\n  - The array should be serialized in a format compatible with the consuming system (e.g., JSON array).\n  - Maximum length and allowed characters for field identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects that represent associated records or entities linked to the primary record within the NetSuite distributed data model. These related lists provide contextual information and enable navigation to connected data, facilitating comprehensive data retrieval and management. 
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, thereby supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries, each representing a distinct association to the primary record.\n- Enables retrieval of linked records such as transactions, custom records, subsidiary data, or other relevant entities.\n- Supports hierarchical or relational data structures by referencing related entities, allowing nested or multi-level associations.\n- Typically read-only in the context of distributed data retrieval but may support updates or synchronization depending on API capabilities and permissions.\n- May include metadata such as record counts, last updated timestamps, or status indicators for each related list.\n- Supports dynamic inclusion or exclusion based on user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Populate with relevant related list objects that are directly associated with the primary record, ensuring accurate representation of relationships.\n- Ensure each related list entry includes unique identifiers, descriptive metadata, and navigation links or references necessary for data access and traversal.\n- Maintain consistency in naming conventions, data structures, and field formats to align with NetSuite’s standard data model and API specifications.\n- Implement pagination, filtering, or sorting mechanisms to efficiently handle large sets of related records within each list.\n- Validate all references and links to ensure data integrity, preventing broken or stale connections within the distributed data environment.\n- Consider caching strategies or incremental updates to optimize performance when dealing with frequently accessed related lists.\n- Respect and enforce access control and permission checks to ensure users only see related lists they are authorized to access.\n\n**Examples**\n- A customer record’s relatedLists might include “Transactions” (e.g., sales orders, invoices), “Contacts” (associated individuals), and “Cases” (customer support tickets).\n- An invoice record’s relatedLists could contain “Payments” (payment records), “Shipments” (delivery details), and “Adjustments” (billing corrections).\n- A custom record type might have relatedLists such as “Attachments” (files linked to the record) or “Notes” (user comments or annotations).\n- A vendor record’s relatedLists may include “Purchase Orders,” “Bills,” and “Vendor Contacts"},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload the data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. 
This flag is critical in scenarios where data accuracy and freshness are paramount, such as after configuration changes or data updates that must be immediately reflected.\n\n**Field behavior:**\n- When set to true, the system bypasses all caches and reloads data or configurations directly from the primary source, ensuring the latest state is retrieved.\n- When set to false or omitted, the system may utilize cached or previously stored data to optimize performance and reduce load times.\n- Primarily used in contexts where stale data could lead to errors, inconsistencies, or outdated processing results.\n- The reload operation triggered by this flag typically involves invalidating caches and refreshing dependent components or services.\n\n**Implementation guidance:**\n- Use this flag judiciously to balance between data freshness and system performance, avoiding unnecessary reloads that could degrade responsiveness.\n- Ensure that enabling forceReload initiates a comprehensive refresh cycle, including clearing relevant caches and reinitializing configuration or data layers.\n- Implement robust error handling during the reload process to manage potential failures without causing system downtime or inconsistent states.\n- Monitor system resource utilization and response times when forceReload is active to identify and mitigate performance bottlenecks.\n- Document scenarios and triggers for using forceReload to guide developers and operators in its appropriate application.\n\n**Examples:**\n- forceReload: true  \n  (forces the system to bypass caches and reload data/configuration from the authoritative source immediately)\n- forceReload: false  \n  (allows the system to serve data from cache if available, improving response time)\n- forceReload omitted  \n  (defaults to false behavior, relying on cached data unless otherwise specified)\n\n**Important notes:**\n- Excessive or unnecessary use of forceReload can lead to increased latency, higher resource consumption, and potential service degradation.\n- This flag does not validate the correctness or integrity of the source data; it only ensures the latest available data is fetched.\n- Downstream systems or processes should be designed to handle the potential delays or transient states caused by forced reloads.\n- Coordination with cache invalidation policies and data synchronization mechanisms is essential to maintain overall system consistency.\n\n**Dependency chain:**\n- Relies on underlying cache management and invalidation frameworks to effectively bypass stored data.\n- Interacts with data retrieval modules, configuration loaders, and possibly distributed synchronization services.\n- May trigger logging, monitoring, or"},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across different operational environments. 
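As a hypothetical fragment pairing this setting with the ioDomain field documented next (values taken from the example lists in both descriptions):\n\n```json\n{ \"ioEnvironment\": \"production\", \"ioDomain\": \"api.netsuite.com\" }\n```\n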
This property determines the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees within the distributed architecture.\n\n**Field behavior**\n- Determines the operational context for all input/output processes within the distributed NetSuite system.\n- Influences how data is read from and written to various storage systems, message queues, or communication channels.\n- Affects performance tuning, security measures, and data consistency mechanisms based on the selected environment.\n- Typically set during system initialization or configuration phases and remains stable during runtime to ensure predictable behavior.\n- May trigger environment-specific logging, monitoring, and error-handling strategies.\n\n**Implementation guidance**\n- Validate the ioEnvironment value against a predefined set of supported environments such as \"development,\" \"staging,\" \"production,\" and any custom configurations.\n- Ensure that the selected environment is compatible with other system settings related to data handling, network communication, and security policies.\n- Implement robust error handling and fallback mechanisms to manage unsupported or invalid environment values gracefully.\n- Clearly document the operational implications, limitations, and recommended use cases for each environment option to guide system administrators and developers.\n- Coordinate environment settings across all distributed nodes to maintain consistency and prevent configuration drift.\n\n**Examples**\n- \"development\" — used for local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production settings for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined environment configurations tailored for specialized I/O requirements or experimental setups.\n\n**Important notes**\n- Changing the ioEnvironment typically requires restarting services or reinitializing connections to apply new configurations.\n- The environment setting directly impacts data integrity, access controls, and compliance with security policies.\n- Sensitive data must be handled according to the security standards appropriate for the selected environment.\n- Consistency across all distributed nodes is critical; all nodes should be configured with compatible ioEnvironment values to avoid data inconsistencies or communication failures.\n- Misconfiguration can lead to degraded performance, security vulnerabilities, or data loss.\n\n**Dependency chain**\n- Depends on system initialization and configuration management components.\n- Interacts with data storage modules, network communication layers, and security frameworks.\n- Influences logging, monitoring, and error"},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment. 
This domain is critical for routing data requests and responses between distributed components and services, ensuring seamless communication and integration across the system.\n\n**Field behavior**\n- Defines the domain name utilized for network communication in distributed NetSuite environments.\n- Serves as the base domain for constructing URLs for API calls, data synchronization, and service endpoints.\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards.\n- Typically remains consistent within a deployment environment but can differ across environments such as development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify that the domain is properly configured in DNS and is resolvable by all distributed components.\n- Validate the domain format against standard domain naming conventions (e.g., RFC 1035).\n- Ensure the domain supports secure communication protocols (e.g., HTTPS with valid SSL/TLS certificates).\n- Coordinate updates to ioDomain with network, security, and operations teams to maintain service continuity.\n- When migrating or scaling services, update ioDomain accordingly and propagate changes to all dependent components.\n- Monitor domain accessibility and performance to detect and resolve connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n- \"dev-networks.internal.company.com\"\n\n**Important notes**\n- Incorrect or misconfigured ioDomain values can cause failed network requests, service interruptions, and data synchronization errors.\n- The domain must support necessary security certificates to enable encrypted communication and protect data in transit.\n- Changes to ioDomain may necessitate updates to firewall rules, proxy configurations, and network security policies.\n- Consistency in ioDomain usage across distributed components is essential to avoid routing conflicts and authentication issues.\n- Consider the impact on caching, CDN configurations, and DNS propagation delays when changing ioDomain.\n\n**Dependency chain**\n- Dependent on underlying network infrastructure, DNS setup, and domain registration.\n- Utilized by distributed service components for constructing communication endpoints.\n- May affect authentication and authorization workflows that rely on domain validation or origin verification.\n- Interacts with security components such as SSL/TLS certificate management and firewall configurations.\n- Influences monitoring, logging, and troubleshooting processes related to network communication."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the precise date and time when the data was last successfully synchronized between the system and NetSuite. 
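For instance, a record that last synchronized successfully on 15 June 2024 at 14:30 UTC would carry:\n\n```json\n{ \"lastSyncedDate\": \"2024-06-15T14:30:00Z\" }\n```\n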
This timestamp is essential for monitoring the freshness, consistency, and integrity of synchronized data, enabling systems to determine whether updates or incremental syncs are necessary.\n\n**Field behavior**\n- Captures the exact date and time of the most recent successful synchronization event.\n- Automatically updates only after a sync operation completes successfully without errors.\n- Serves as a reference point to assess if data is current or requires refreshing.\n- Typically stored and transmitted in ISO 8601 format to maintain uniformity across different systems and platforms.\n- Does not reflect the start time or duration of the synchronization process, only its successful completion.\n\n**Implementation guidance**\n- Record the timestamp in Coordinated Universal Time (UTC) to prevent timezone-related inconsistencies.\n- Update this field exclusively after confirming a successful synchronization to avoid misleading data states.\n- Validate the date and time format rigorously to comply with ISO 8601 standards (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Utilize this timestamp to drive incremental synchronization logic, data refresh triggers, or audit trails in downstream workflows.\n- Handle cases where the field may be null or missing, indicating that no synchronization has occurred yet.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n- \"2024-01-10T23:59:59Z\"\n\n**Important notes**\n- This timestamp marks the completion of synchronization, not its initiation.\n- Do not update this field if the synchronization process fails or is incomplete.\n- Maintaining timezone consistency (UTC) is critical to avoid synchronization conflicts or data mismatches.\n- The field may be null or omitted if synchronization has never been performed.\n- Systems relying on this field should implement fallback or error handling for missing or invalid timestamps.\n\n**Dependency chain**\n- Depends on successful completion of the synchronization process between the system and NetSuite.\n- Influences downstream processes such as incremental sync triggers, data validation, and audit logging.\n- May be referenced by monitoring or alerting systems to detect synchronization delays or failures.\n\n**Technical details**\n- Stored as a string in ISO 8601 format with UTC timezone designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Should be generated programmatically at the moment synchronization completes successfully"},"settings":{"type":"object","description":"settings: >\n  Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - Encapsulates a collection of key-value pairs representing various configuration parameters that govern the distributed NetSuite integration’s operation.\n  - Includes toggles (boolean flags), numeric thresholds, timeouts, batch sizes, logging options, and other customizable settings relevant to distributed processing workflows.\n  - Typically optional for basic usage but essential for advanced customization, performance tuning, and adapting the integration to specific deployment environments.\n  - Changes to these settings can dynamically alter the integration’s behavior, such as retry logic, concurrency limits, and error handling strategies.\n  **Implementation guidance**\n  - Define each setting with a clear, descriptive key name and an appropriate data type (e.g., integer, boolean, string).\n  - Validate input 
values rigorously to ensure they fall within acceptable ranges or conform to expected formats to prevent runtime errors.\n  - Provide sensible default values for all settings to maintain stable and predictable integration behavior when explicit configuration is absent.\n  - Document each setting comprehensively, including its purpose, valid values, default, and impact on the integration’s operation.\n  - Consider versioning or schema validation to manage changes in settings structure over time.\n  - Ensure that sensitive information is either excluded or securely handled if included within settings.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — configures retry attempts, enables detailed logging, and sets operation timeout.\n  - `{ \"batchSize\": 50, \"useNonProduction\": false }` — sets the number of records processed per batch and specifies production environment usage.\n  - `{ \"maxConcurrentJobs\": 10, \"errorThreshold\": 5, \"logLevel\": \"DEBUG\" }` — limits concurrent jobs, sets error tolerance, and defines logging verbosity.\n  **Important notes**\n  - Modifications to settings may require restarting or reinitializing the integration service to apply changes effectively.\n  - Incorrect or suboptimal configuration can cause integration failures, data inconsistencies, or degraded performance.\n  - Avoid storing sensitive credentials or secrets in settings unless encrypted or otherwise secured.\n  - Settings should be managed carefully in multi-environment deployments to prevent configuration drift.\n  **Dependency chain**\n  - Dependent on the overall NetSuite integration configuration and"},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to utilize the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture within the NetSuite environment.\n\n**Field behavior**\n- Determines if the SuiteScript 2.0 (SS2) framework is enabled for the NetSuite integration.\n- When set to true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions.\n- When set to false or omitted, the system defaults to using SuiteScript 1.0 or legacy frameworks.\n- Influences script loading mechanisms, module resolution, and API compatibility within NetSuite.\n- Impacts debugging, deployment, and maintenance processes due to differences in framework structure.\n\n**Implementation guidance**\n- Set this property to true to leverage modern SuiteScript 2.0 features such as improved modularity, asynchronous processing, and enhanced performance.\n- Verify that all custom scripts, modules, and third-party integrations are fully compatible with SuiteScript 2.0 before enabling this flag.\n- Conduct thorough testing in a non-production or development environment to identify potential issues arising from framework changes.\n- Use this flag to facilitate gradual migration from legacy SuiteScript 1.0 to SuiteScript 2.0, allowing toggling between frameworks during transition phases.\n- Update deployment pipelines and CI/CD processes to accommodate SuiteScript 2.0 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` — Disables SuiteScript 2.0, falling back to legacy SuiteScript 1.0 framework.\n- Property omitted — Defaults to legacy SuiteScript framework (typically 1.0), maintaining backward 
compatibility.\n\n**Important notes**\n- Enabling the SS2 framework may require refactoring existing scripts to comply with SuiteScript 2.0 syntax, including the use of define/require for module loading.\n- Some legacy APIs, global objects, and modules available in SuiteScript 1.0 may be deprecated or behave differently in SuiteScript 2.0.\n- Performance improvements and new features in SS2 may not be realized if scripts are not properly adapted.\n- Ensure that all scheduled scripts, workflows, and integrations are reviewed for compatibility to prevent runtime errors.\n- Documentation and developer training may be necessary to fully leverage SuiteScript 2.0 capabilities.\n\n**Dependency chain**\n- Dep"},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"type: The type property specifies the category or classification of the framework version within the NetSuite distributed system. It defines the nature or role of the framework version, such as whether it is a major release, minor update, patch, or experimental build. This classification is essential for managing version control, deployment strategies, and compatibility assessments across the distributed environment.\n\n**Field behavior**\n- Determines how the framework version is identified, categorized, and processed within the system.\n- Influences compatibility checks, update mechanisms, and deployment workflows.\n- Enables filtering, sorting, and selection of framework versions based on their type.\n- Affects automated decision-making processes such as rollback, promotion, or deprecation of versions.\n\n**Implementation guidance**\n- Use standardized, predefined values or enumerations to represent different types (e.g., \"major\", \"minor\", \"patch\", \"experimental\") to ensure consistency.\n- Maintain uniform naming conventions and case sensitivity across all framework versions.\n- Implement validation logic to restrict the type property to allowed categories, preventing invalid or unsupported entries.\n- Document any custom or extended types clearly to avoid ambiguity.\n- Ensure that changes to the type property trigger appropriate notifications or logging for audit purposes.\n\n**Examples**\n- \"major\" — indicating a significant release that introduces new features or breaking changes.\n- \"minor\" — representing smaller updates that add enhancements or non-breaking improvements.\n- \"patch\" — for releases focused on bug fixes, security patches, or minor corrections.\n- \"experimental\" — denoting versions under testing, development, or not intended for production use.\n- \"deprecated\" — marking versions that are no longer supported or recommended for use.\n\n**Important notes**\n- The type value directly impacts deployment strategies, including automated rollouts and rollback procedures.\n- Accurate and consistent typing is critical for automated update systems and dependency management tools to function correctly.\n- Changes to the type property should be documented thoroughly and communicated to all relevant stakeholders to avoid confusion.\n- Misclassification can lead to improper handling of versions, potentially causing system instability or incompatibility.\n- The property should be reviewed regularly to align with evolving release management policies.\n\n**Dependency chain**\n- Depends on the version numbering scheme and release management policies defined in the frameworkVersion.\n- Interacts with deployment, update, and compatibility modules that rely on the type classification to 
determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alerting."},"enum":{"type":"array","items":{"type":"string"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management processes."},"lowercase":{"type":"boolean","description":"Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the content or structure of the version string, only its 
letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided by the programming environment.\n- Should be applied consistently across"}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system. 
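For orientation, the sub-properties documented below might combine like this (a sketch only; the values are placeholders rather than canonical NetSuite versions, and the object shape is assumed from this schema):\n```json\n{\n  \"frameworkVersion\": {\n    \"type\": \"minor\",\n    \"enum\": [\"1.0.0\", \"1.1.0\", \"2.0.0\"],\n    \"lowercase\": true\n  }\n}\n```\nThe version string itself behaves as follows. 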
It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts 
or quantities equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or"},"getList":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the category or classification of the records to be retrieved from the NetSuite system.\n  This property determines the type of entities that the getList operation will query and return.\n  It defines the scope of the data retrieval by indicating which NetSuite record type the API should target.\n  **Field behavior**\n  - Defines the specific record type to fetch, such as customers, transactions, or items.\n  - Influences the structure, fields, and format of the returned data based on the selected record type.\n  - Must be set to a valid NetSuite record type identifier recognized by the API.\n  - Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n  **Implementation guidance**\n  - Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n  - Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n  - Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n  - Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n  - When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n  **Examples**\n  - \"customer\"\n  - \"salesOrder\"\n  - \"inventoryItem\"\n  - \"employee\"\n  - \"vendor\"\n  - \"purchaseOrder\"\n  **Important notes**\n  - Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n  - The type property directly affects query performance, response size, and the complexity of the returned data.\n  - Some record types may require additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n  - Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n  **Dependency chain**\n  - The 'type' property is a required input for the NetSuite.getList operation.\n  - The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n  - Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n  - Validation and error handling mechanisms rely on the correctness of the 'type' value.\n  **Technical details**\n  - Accepts string values corresponding to valid NetSuite record type identifiers."},"typeId":{"type":"string","description":"The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n  **Field behavior**\n  - Specifies the category or type of records to fetch from NetSuite.\n  - Determines the schema and fields available in the returned records.\n  - Must correspond to a valid NetSuite record type identifier.\n  **Implementation guidance**\n  - Use predefined NetSuite record type IDs as per NetSuite documentation.\n  - Validate the typeId before making the API call to avoid errors.\n  - Ensure the typeId aligns with the permissions and roles of the API user.\n  **Examples**\n  - \"customer\" for customer records.\n  - \"salesOrder\" for sales order records.\n  - \"employee\" for employee records.\n  **Important notes**\n  - Incorrect or unsupported typeId values will result in API errors.\n  - The typeId is case-sensitive and must match NetSuite's expected values.\n  - Changes in NetSuite's API or record types may affect valid typeId values.\n  **Dependency chain**\n  - Depends on the NetSuite record types supported by the account.\n  - Influences the structure and content of the getList response.\n  **Technical details**\n  - Typically a string value representing the internal NetSuite record type.\n  - Used as a parameter in the getList API endpoint to filter records.\n  - Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system. 
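For instance, a getList entry that targets a single record by its internal ID might look like this (a sketch; the ID value is a placeholder):\n```json\n{ \"typeId\": \"customer\", \"internalId\": \"12345\" }\n```\n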
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- 
Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"string","description":"The unique identifier for a record within the NetSuite system. This identifier is used to retrieve, update, or reference specific records in API operations.\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
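As a hedged sketch, a getList block might sit alongside the searchPreferences documented later in this schema (nesting under the export's netsuite object mirrors the netsuite.file convention described below; all values are placeholders):\n```json\n{\n  \"netsuite\": {\n    \"getList\": { \"typeId\": \"salesOrder\", \"internalId\": \"67890\" },\n    \"searchPreferences\": { \"bodyFieldsOnly\": true, \"pageSize\": 50 }\n  }\n}\n```\n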
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. 
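A minimal sketch pairing this flag with its sibling preferences (values are illustrative only):\n```json\n{\n  \"searchPreferences\": {\n    \"bodyFieldsOnly\": true,\n    \"returnSearchColumns\": false,\n    \"pageSize\": 100\n  }\n}\n```\n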
This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May"},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks. 
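For example, a search matching 1,230 records with a pageSize of 50 paginates into ceil(1230 / 50) = 25 pages, the last of which holds the remaining 30 records. 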
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting page"},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns: Specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce 
bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `returnSearchRows`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `returnSearchRows` (controls whether search rows are returned).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"searchPreferences: Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending 
order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with"},"file":{"type":"object","description":"Configuration for retrieving files from NetSuite file cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead. 
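A minimal sketch of the static case, combining this field with the backupFolderInternalId documented below (the folder IDs are placeholders):\n```json\n{\n  \"netsuite\": {\n    \"file\": {\n      \"folderInternalId\": \"12345\",\n      \"backupFolderInternalId\": \"67890\"\n    }\n  }\n}\n```\n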
For a dynamic folder ID, for example: {{{myFileField.fileName}}}.\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user must have access to the folder\n- Folder permissions and user roles can impact the ability to perform operations even with a valid folderInternalId\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder hierarchy changes do not affect the folderInternalId, ensuring persistent reference integrity\n- Internal IDs may differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored. 
This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Typically handled as a numeric string in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"112233\"\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting the folder associated with this ID will disrupt backup processes until the ID is updated.\n- Proper access rights and permissions are mandatory to write backup files to the specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: string containing the numeric NetSuite folder internal ID.\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify the backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript or REST APIs."}}}},"required":[]},"RDBMS":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Rdbms export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be 
included for other connection types. It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. 
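For instance, with a configuration like the following (a sketch assuming the settings live under an s3 key on the export, per this schema's S3 object; bucket names and region are placeholders), a source object `exports/2023/orders.csv` would be backed up as `processed/orders.csv`:\n```json\n{\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"example-source-bucket\",\n    \"keyStartsWith\": \"exports/\",\n    \"keyEndsWith\": \".csv\",\n    \"backupBucket\": \"example-backup-bucket\",\n    \"keyPrefix\": \"processed/\"\n  }\n}\n```\n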
Only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (e.g., files, HTTP responses). Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. 
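A hedged sketch (the parsers placement and the XPath-style path are illustrative) that forces `orders/order` to parse as an array even when only one `<order>` element is present:\n```json\n{\n  \"parsers\": [{\n    \"version\": \"1\",\n    \"type\": \"xml\",\n    \"rules\": {\n      \"V0_json\": false,\n      \"listNodes\": [\"orders/order\"]\n    }\n  }]\n}\n```\n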
Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. 
Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. 
REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
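For example, when iterating an array of objects (an illustrative fragment; the path is hypothetical):\n\n```json\n{\"extract\": \"$.order.items[*]\", \"sourceDataType\": \"objectarray\"}  // paths are hypothetical\n```\n\n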
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. 
Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. 
**Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - 
\"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. 
The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/exports":{"post":{"summary":"Create an export","description":"Creates a new export configuration that can be used to retrieve data from applications\nor external sources.\n","operationId":"createExport","tags":["Exports"],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"201":{"description":"Export created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
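
The spec above includes the `POST /v1/exports` operation for creating exports. As a quick reference, here is a minimal sketch of that call in Python. The `requests` dependency, the `IO_API_TOKEN` environment variable, and the placeholder `_connectionId` are illustrative assumptions, not part of the spec:

```python
import os

import requests

BASE_URL = "https://api.integrator.io"  # use https://api.eu.integrator.io for EU accounts
TOKEN = os.environ["IO_API_TOKEN"]  # assumed location for your bearer token

payload = {
    "name": "Orders from Example API",  # "name" is the only required request field
    "adaptorType": "HTTPExport",
    # Placeholder 24-character ObjectId; must reference an existing connection
    # whose type is compatible with the adaptorType (here, an "http" connection).
    "_connectionId": "000000000000000000000000",
    # A real HTTPExport would normally also carry its adapter-specific "http" object.
}

resp = requests.post(
    f"{BASE_URL}/v1/exports",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)

if resp.status_code == 201:
    print("Created export", resp.json()["_id"])
else:
    # 400/422 bodies use the {"errors": [...]} envelope described above;
    # a 401 body is a bare {"message": "..."} object instead.
    print(resp.status_code, resp.text)
```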

## Get an export

> Returns the complete configuration of a specific export.<br>
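
A minimal sketch of the retrieve call in Python, assuming the conventional GET-by-id path `/v1/exports/{_id}` together with the same `requests` and token assumptions as the create example above:

```python
import os

import requests

BASE_URL = "https://api.integrator.io"
TOKEN = os.environ["IO_API_TOKEN"]  # assumed location for your bearer token

export_id = "000000000000000000000000"  # placeholder 24-character ObjectId

resp = requests.get(
    f"{BASE_URL}/v1/exports/{export_id}",  # assumed GET-by-id path
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

export = resp.json()
# The response merges the request-shaped fields (name, adaptorType, ...) with
# read-only fields such as _id, createdAt, and lastModified.
print(export["_id"], export["name"], export.get("adaptorType"))
```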

````json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this export."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this export was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this export."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this export is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this export expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this export."}}}]},"Request":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor clarity when managing multiple integrations. 
The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. 
This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). 
The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid \"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate limiting and concurrency controls apply differently\n\nWhen false (default), the export 
operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File"}}},"http":{"$ref":"#/components/schemas/Http"},"file":{"$ref":"#/components/schemas/File"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"as2":{"$ref":"#/components/schemas/AS2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"ftp":{"$ref":"#/components/schemas/FTP"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports 
that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. `adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. 
Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional 
acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. 
It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. 
Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adaptor types (except HTTP and REST, where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction (HTTP):**\n- For HTTP exports, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" for HTTP exports; if you include it, the configuration will be invalid. 
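For orientation, a minimal delta configuration for an HTTP export might look like this sketch (the endpoint, query parameter, and connection ID are illustrative placeholders, not a prescribed API):\n\n```\n{\n  \"name\": \"Users (delta)\",\n  \"type\": \"delta\",\n  \"delta\": { \"lagOffset\": 60000 },\n  \"_connectionId\": \"<24-char-hex-connection-id>\",\n  \"http\": {\n    \"method\": \"GET\",\n    \"relativeURI\": \"/api/v1/users?modified_since={{lastExportDateTime}}\"\n  }\n}\n```\n\nNote that the sketch carries no dateField, in line with the rule above. 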
\n\nExample HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types. 
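For orientation, a minimal test-type export can be sketched with just the type flag and a limit (values are illustrative):\n\n```\n{\n  \"name\": \"Orders (sample run)\",\n  \"type\": \"test\",\n  \"test\": { \"limit\": 10 }\n}\n```\n\n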
Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. 
Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. 
Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. **Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. **System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. 
May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**Hmac Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more complex to configure\n\n**Secret url**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose 
the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**Sha-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**Sha-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**Sha-384/sha-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": 
\"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. 
It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
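As an illustration, a token-verified webhook sketch that expects the token at meta.token in the request body (both values are hypothetical):\n\n```\n\"webhook\": {\n  \"verify\": \"token\",\n  \"token\": \"<random-32-plus-char-token>\",\n  \"path\": \"meta.token\"\n}\n```\n\n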
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
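A minimal Basic Authentication webhook sketch (both credentials are placeholders to be replaced with strong values):\n\n```\n\"webhook\": {\n  \"verify\": \"basic\",\n  \"username\": \"<webhook-user>\",\n  \"password\": \"<strong-random-password>\"\n}\n```\n\n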
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 ok**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**Json (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**Xml**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n\nIMPORTANT: Be careful when setting 
security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook provider's documentation for exact 
requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 ok (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**Json (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**Xml**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. 
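For instance, a Slack-style verification response could be sketched as follows (grouping the challenge fields documented above; the body shape follows the Slack pattern shown earlier):\n\n```\n\"webhook\": {\n  \"challengeSuccessStatusCode\": 200,\n  \"challengeSuccessMediaType\": \"json\",\n  \"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n}\n```\n\n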
Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**Utf-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**Utf-16le (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**Gb18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**Iso-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift jis (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for ai agents**\n\n1. **Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. 
**Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**Csv Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**Json Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**Xml Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data. 
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for ai agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. 
**Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. **Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for ai agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for ai agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. **Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**Pgp/gpg Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. **Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for ai agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects concurrent processing but within system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
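Pulling the field-selection guidance above into a single hypothetical `sortByFields` value (all field names are illustrative):

```json
// Hypothetical: group rows by customer, oldest order first within each
// customer, with equal dates falling back to largest amount first.
// Dot notation reaches the nested customer id.
"sortByFields": [
  { "field": "customer.id", "descending": false },
  { "field": "orderDate", "descending": false },
  { "field": "amount", "descending": true }
]
```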
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes| Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quotes handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard csv (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European csv (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. **Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. 
**Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. **Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. 
**Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When headers exist but should be ignored, use `hasHeaderRow: true` and `rowsToSkip: 1`\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for ai agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count from Zero**:\n    - Row 1 = 0, Row 2 = 1, Row 3 = 2, etc.\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. **For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. 
**Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior - test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**Json parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. **Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. 
**Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. 
**Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. **Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- \"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". 
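A minimal sketch of that pairing, as a hypothetical fragment with other export fields elided (the `hasHeaderRow` setting is documented below):

```json
// Hypothetical fragment: parse .xlsx workbooks and treat the first
// sheet row as field names rather than data.
"type": "xlsx",
"xlsx": {
  "hasHeaderRow": true
}
```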
This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. **Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields 
are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. **Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
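A minimal sketch of that pairing, as a hypothetical fragment; the XPath shown matches the simple element-list pattern discussed below:

```json
// Hypothetical fragment: every <Record> element under <Records>
// becomes one record.
"type": "xml",
"xml": {
  "resourcePath": "/Records/Record"
}
```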
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**Xml parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced xml**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or `/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**Xpath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. 
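Before the formulation rules below, one more hypothetical fragment, tying the heterogeneous-records pattern above back to configuration:

```json
// Hypothetical fragment: keep only the product entries from the mixed
// <Feed> above; the [@type="product"] predicate drops other Entry kinds.
"xml": {
  "resourcePath": "/Feed/Entry[@type=\"product\"]"
}
```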
When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For repeating integration, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**Xpath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. **Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. 
**Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. **Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common xpath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
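\n\nA minimal sketch of the object (the ObjectId value is a placeholder):\n\n```json\n{\n  \"fileDefinition\": {\n    \"_fileDefinitionId\": \"507f1f77bcf86cd799439011\"\n  }\n}\n```\n\n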
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. **File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**Edi Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique 
identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct id**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. **File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese are the most commonly available fields across most file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
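\n\nA minimal sketch of the field (the directory shown is purely illustrative):\n\n```json\n{\n  \"backupPath\": \"/archive/exports/backups\"\n}\n```\n\n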
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. **Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of 
filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. 
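\n\nA complete filter of this kind might look like the following minimal sketch, combining operators documented under `rules` below (the field names and the size limit are placeholders):\n\n```json\n{\n  \"type\": \"expression\",\n  \"expression\": {\n    \"version\": \"1\",\n    \"rules\": [\n      \"and\",\n      [\"endswith\", [\"extract\", \"filename\"], \".csv\"],\n      [\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]\n    ]\n  }\n}\n```\n\n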
This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- 
Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. 
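\n\nA minimal sketch of the configuration (both the ObjectId and the function name are placeholders):\n\n```json\n{\n  \"type\": \"script\",\n  \"script\": {\n    \"_scriptId\": \"507f191e810c19729de860ea\",\n    \"function\": \"filterItems\"\n  }\n}\n```\n\n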
This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Http":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (MOST COMMON CASE)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (RARE USE CASE)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files from an FTP server via HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
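\n\nA worked composition sketch (all values are hypothetical; the baseURI is configured on the connection record, not on the export):\n\n```json\n{\n  \"baseURI\": \"https://api.example.com/v2\",\n  \"relativeURI\": \"/orders?since={{lastExportDateTime}}\"\n}\n// resulting request: https://api.example.com/v2/orders?since=<last export timestamp>\n```\n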
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
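\n\nFor example (hypothetical scenario), an otherwise JSON-based API with a single XML endpoint could keep \"json\" on the connection and override only this export:\n\n```json\n{\n  \"requestMediaType\": \"xml\"\n}\n```\n\n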
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. 
Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for ai agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. **page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. 
**skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. **nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. 
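\n\nAs an end-to-end sketch (endpoint and parameter names are hypothetical), page-based paging pairs the paging object with the paging variable in the request URI:\n\n```json\n{\n  \"relativeURI\": \"/products?limit=100&page={{export.http.paging.page}}\",\n  \"paging\": {\n    \"method\": \"page\",\n    \"page\": 1\n  }\n}\n```\n\n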
Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for AI agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. 
When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page url pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error** PATTERNS\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use** \"body\":\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\" \n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. 
Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. 
This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. 
For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. 
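For instance, with Pattern 1 above you would set:\n```\n\"maxPagePath\": \"meta.totalPages\"\n```\nand not \"meta.currentPage\". 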
The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. `[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. 
Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
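For example, given a hypothetical error response:\n```json\n{\n  \"error\": {\n    \"message\": \"Invalid cursor\"\n  }\n}\n```\nsetting `errorPath` to `error.message` extracts \"Invalid cursor\" as the error message. 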
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but inverted logic — checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint this export uses.\n\nThis ID selects which endpoint definition of the prebuilt HTTP connector requests are routed through, and therefore which URL, headers, authentication, and related settings apply.\n\n**Field behavior**\n- Must reference an existing HTTP connector endpoint; a missing or invalid reference typically results in integration failures.\n- Generally remains unchanged after initial assignment to keep routing behavior consistent.\n- Typically used together with _httpConnectorVersionId and _httpConnectorResourceId to resolve the full endpoint configuration.\n\n**Important notes**\n- Changes to the endpoint configuration referenced by this ID affect all integrations that rely on it.\n- Proper permissions and access controls must be in place to use the referenced endpoint.\n\n**Technical details**\n- Data type: string\n- Format: ObjectId referencing the HTTP connector endpoint document.\n"}}},"Salesforce":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce through queries, real-time events, or file retrieval.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n**Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**SOQL Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. This decision affects all other configuration aspects.\n
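\nFor example, a minimal query-based configuration might look like the following sketch (the query itself is illustrative):\n\n```json\n{\n  \"type\": \"soql\",\n  \"api\": \"rest\",\n  \"soql\": {\n    \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\"\n  }\n}\n```\n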
","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": \"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**SOQL Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static File ID**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic File ID (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous 
steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**REST API**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk API 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your 
data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the 
export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- Note that SOQL has no \"*\" wildcard; list fields explicitly (FIELDS(ALL) is available only where Salesforce supports it)\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Case-sensitive (match Salesforce API names exactly)\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When using relationship queries, child objects count against\ngovernor limits differently. 
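For instance, 2,000 Accounts with 50 Contacts each would return 100,000 child rows alongside the 2,000 parent rows in a single query. 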
For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":200000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event 
handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
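For example, with \"batchSize\": 50, 500 changed records arrive as 10 batches of 50 rather than one batch of 500. 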
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild records related to the primary object. 
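For example, to include each Account's Contacts (a sketch; the referenced fields are illustrative):\n```json\n{\n  \"sObjectType\": \"Contact\",\n  \"parentField\": \"AccountId\",\n  \"referencedFields\": [\"FirstName\", \"LastName\", \"Email\"],\n  \"orderBy\": \"LastName\"\n}\n```\n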
Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- Empty array will retrieve only Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = 
true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**AS2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**AS2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. 
It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. 
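As a quick orientation, a minimal query configuration might look like the following sketch (table, key, and placeholder values are illustrative):\n```json\n{\n  \"region\": \"us-east-1\",\n  \"method\": \"query\",\n  \"tableName\": \"Customers\",\n  \"keyConditionExpression\": \"#pk = :pkValue\",\n  \"expressionAttributeNames\": \"{\\\"#pk\\\": \\\"customerId\\\"}\",\n  \"expressionAttributeValues\": \"{\\\":pkValue\\\": \\\"12345\\\"}\"\n}\n```\n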
It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid 
operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP":{"type":"object","description":"Configuration object for 
FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**JDBC export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** What belongs in this object\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's top-level `type` is `\"once\"`\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's top-level type is \"once\".**\n\nIf the export's type is \"once\", you MUST include this object.\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n
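\nFor composite keys, the same pattern extends to multiple placeholders, e.g. (table and column names are illustrative): \"UPDATE order_lines SET exported=true WHERE order_id={{record.order_id}} AND line_id={{record.line_id}}\"\n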
"}}}}},"MongoDB":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently support the following capabilities:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n
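\nFor reference, a minimal find configuration might look like the following sketch (collection and field names are illustrative):\n\n```json\n{\n  \"method\": \"find\",\n  \"collection\": \"orders\",\n  \"filter\": \"{\\\"status\\\": \\\"active\\\"}\",\n  \"projection\": \"{\\\"orderNumber\\\": 1, \\\"status\\\": 1, \\\"_id\\\": 0}\"\n}\n```\n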
","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum 
size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**Netsuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.blob`\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"` and configure `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. 
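As an illustrative contrast (internal IDs are placeholders):\n\n```\n{\"type\": \"blob\", \"netsuite\": {\"internalId\": \"12345\"}}                           // blob: raw transfer, netsuite.type omitted\n{\"netsuite\": {\"type\": \"file\", \"file\": {\"folderInternalId\": \"67890\"}}}          // file: contents parsed into records\n```\n\n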
For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This makes it easy to use `suiteapp2.0` saved searches with dynamic inputs.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob exports (raw file transfer without parsing)"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"A collection of key-value pairs that provide additional contextual information about the NetSuite entity, such as custom attributes, tags, or other supplementary data that is not part of the core entity schema.\n\n**Field behavior**\n- Optional; entries do not affect core entity behavior unless explicitly integrated with business logic.\n- Typically a map with string keys and values that may be strings, numbers, booleans, arrays, or nested objects.\n- Entries can be added, updated, or removed without impacting the primary entity schema.\n\n**Examples**\n- {\"department\": \"Sales\", \"region\": \"EMEA\", \"priority\": \"high\"}\n- {\"tags\": [\"urgent\", \"review\"], \"lastUpdatedBy\": \"user123\"}\n- {\"nestedInfo\": {\"createdBy\": \"admin\", \"createdAt\": \"2024-05-01T12:00:00Z\"}}\n\n**Important notes**\n- Keys should be unique within the collection to prevent accidental overwrites.\n- Modifications to metadata do not trigger business workflows unless explicitly configured to do so."},"selectoption":{"type":"object","description":"Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each option pairs a user-friendly label (shown to users, and safe to localize) with a stable value (used internally and in API calls).\n\n**Examples**\n- { label: \"Active\", value: \"1\" }\n- { label: \"Pending Approval\", value: \"3\" }\n- { label: \"High Priority\", value: \"high\" }\n\n**Important notes**\n- Values should remain stable over time; changing them can break integrations or corrupt data.\n- Available options may depend on the parent record, user roles, or permissions.\n- Utilized within field definitions that support selection inputs (e.g., dropdown or multi-select fields)."},"customFieldMetadata":{"type":"object","description":"Metadata describing the custom fields defined in the NetSuite account, including each field's configuration, behavior, and constraints.\n\n**Field behavior**\n- May include attributes such as field ID, label, data type, default value, validation rules, display settings, sourcing information, and field dependencies.\n- Reflects the current state of custom fields in the NetSuite account; refresh it when custom fields are added, modified, or removed.\n- Useful for validating input data against custom field constraints (data type, required status, allowed values) before submission.\n\n**Examples**\n- A custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- A custom list/record field specifying its valid options, their internal IDs, and whether multiple selections are allowed.\n\n**Important notes**\n- Structure and content vary with the NetSuite configuration, customizations, and API version.\n- Access may require appropriate NetSuite permissions; unauthorized access can return incomplete or no metadata.\n- Changes to custom fields in NetSuite (such as renaming, deleting, or changing data types) can impact this metadata and any integrations that rely on it.\n- Handle null or partially loaded metadata gracefully, and refresh cached metadata periodically or on demand."},"skipGrouping":{"type":"boolean","description":"Indicates whether to bypass the grouping of related records during processing, so that each record is handled individually rather than aggregated into groups.\n\n**Field behavior**\n- true: each record is processed independently, without being combined by shared attributes.\n- false or omitted: related records are aggregated according to the configured grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Affects the granularity of the data passed to downstream steps.\n\n**Important notes**\n- Skipping grouping can increase processing time, memory usage, and output size.\n- Some reports, 
dashboards, or integrations may require grouped data; verify compatibility before enabling this option.\n- The default behavior is grouping enabled (skipGrouping = false) unless explicitly overridden.\n- Changes to this setting may affect data consistency and comparability with previously generated reports.\n\n**Technical details**\n- Data type: Boolean. Default value: false (grouping enabled).\n- When true, bypasses aggregation logic and processes each record individually."},"statsOnly":{"type":"boolean","description":"Indicates whether the response should include only aggregated statistical summary data, without any detailed individual records. Used to optimize response size and performance when detailed data is unnecessary.\n\n**Field behavior**\n- true: only summary statistics (counts, averages, sums, or other aggregate metrics) are returned.\n- false or omitted: detailed records are returned along with any statistical summaries.\n- Reduces bandwidth and processing time when record-level data is not needed (e.g., dashboards, reports, monitoring tools).\n\n**Important notes**\n- The response shape changes significantly when statsOnly is true; clients must handle the difference.\n- Pagination parameters may be ignored or behave differently when statsOnly is enabled, since detailed records are excluded.\n- Not all endpoints implement statsOnly; verify support before relying on it.\n\n**Technical details**\n- Data type: Boolean. Default value: false."},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export type is \"blob\". For blob exports, you must specify the internalId of the file to export from the NetSuite file cabinet.\n\n**Field behavior**\n- Identifies a specific file in the NetSuite file cabinet by its internal ID\n- Required for blob exports (raw binary file transfers without parsing)\n- The file at this internal ID will be exported as-is without parsing\n\n**Implementation guidance**\n- For blob exports: Set the export's top-level type to \"blob\" (not netsuite.type) and provide netsuite.internalId\n- Obtain the file's internalId from NetSuite's file cabinet or via API\n- Validate the internalId corresponds to an existing file before export\n\n**Examples**\n- \"12345\" - Internal ID of a specific file\n- \"67890\" - Another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId which specifies a folder for file exports with parsing\n- For blob exports: Use netsuite.internalId (file ID)\n- For file exports with parsing: Use netsuite.file.folderInternalId (folder ID)"},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"Whether to delete the file from the NetSuite file cabinet after it has been successfully exported.\n\n**Field behavior**\n- true: the file is deleted as soon as the export completes without errors.\n- false or omitted: the file remains in its original location after export.\n- Deletion occurs only after the export succeeds; it does not affect the export itself.\n\n**Important notes**\n- File deletion is permanent and cannot be undone; ensure the file is no longer needed before enabling this option.\n- Use caution in multi-user or multi-process environments where files may be shared or required beyond the export operation.\n- Immediate purging may interfere with backup, archival, or compliance requirements if files are deleted too soon; consider safeguards before enabling automatic purging in production.\n\n**Technical details**\n- Data type: Boolean. Default behavior is to retain files unless explicitly set to `true`.\n- Deletion is triggered only on successful export completion and requires sufficient file cabinet permissions.\n
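\nFor example, a blob export that cleans up the source file after transfer might look like this sketch (internal ID is a placeholder):\n\n```\n{\"type\": \"blob\", \"netsuite\": {\"internalId\": \"12345\", \"blob\": {\"purgeFileAfterExport\": true}}}\n```\n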
"}},"description":"Configuration for retrieving raw binary files from the NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: Raw binary transfer WITHOUT parsing - leave netsuite.type blank\n- File exports: Parse file contents into records - set netsuite.type to \"file\"\n\nDo NOT use blob configuration when you want file content parsed into data records.\n\n**Important notes**\n- File content is transferred as-is, without parsing or transformation.\n- Blob size may be limited by NetSuite API constraints; exceeding those limits can cause failures.\n- Large blobs should be handled using chunked or streamed transfers to avoid memory issues and improve reliability.\n- Blobs may contain sensitive information requiring encryption and strict access controls; access requires authentication and authorization aligned with NetSuite's security model."},"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"Specifies the type of NetSuite record that the RESTlet will interact with. This determines the schema, validation rules, and operations applicable to the record, and therefore how data is processed and managed by the RESTlet.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the NetSuite record type as recognized by NetSuite.\n- For custom record types, use the script ID format (e.g., \"customrecord_myCustomRecord\").\n- Confirm that the RESTlet script has the necessary permissions and roles assigned to access and manipulate the specified record type.\n- Test RESTlet behavior after changing the recordType to verify correct handling of data and operations.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"vendor\"\n- \"customrecord_myCustomRecord\"\n\n**Important notes**\n- The recordType must correspond to a valid, supported NetSuite record type; invalid values cause API calls to fail.\n- Permissions and role restrictions in NetSuite can limit access to certain record types, impacting RESTlet functionality."},"searchId":{"type":"string","description":"The unique identifier for a saved search in NetSuite, used to specify which saved search the RESTlet should execute. This ID corresponds to the internal ID assigned to saved searches within the NetSuite system. 
It enables the RESTlet to run predefined queries and retrieve data based on the saved search's criteria and configuration.\n\n**Field behavior**\n- Must correspond to a valid, existing saved search internal ID in the NetSuite account.\n- Determines the dataset and filters applied when retrieving search results.\n- Typically required when invoking the RESTlet to perform search operations.\n\n**Implementation guidance**\n- Verify that the searchId matches an existing saved search in the target NetSuite environment.\n- Ensure the integration role or user has permission to access and execute the saved search.\n- Implement error handling for searchIds that are invalid, missing, or inaccessible due to permission restrictions.\n\n**Examples**\n- \"1001\" - a saved search internal ID used to retrieve customer records.\n- \"2002\" - a saved search internal ID configured to return transaction data.\n\n**Important notes**\n- An incorrect or non-existent searchId results in errors or empty results; permissions and sharing settings on the saved search affect the data returned.\n- Changes to the saved search itself (filters, columns, criteria) change the RESTlet output without changing the searchId."},"useSS2Restlets":{"type":"boolean","description":"Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. This setting controls the version of RESTlets invoked during API communication with NetSuite, impacting compatibility, performance, and available features.\n\n**Field behavior**\n- true: the system exclusively uses SuiteScript 2.0 RESTlets.\n- false or omitted: SuiteScript 1.0 RESTlets are used by default.\n- Influences the structure, capabilities, and response formats of API calls.\n\n**Implementation guidance**\n- Verify that SuiteScript 2.0 RESTlets are deployed, configured, and accessible in the target NetSuite environment before enabling.\n- Test existing integrations and workflows when switching from SuiteScript 1.0 to 2.0; legacy SuiteScript 1.0 RESTlets may need migration or parallel support.\n- Switching RESTlet versions can change API response formats and behaviors; keep version control and rollback plans in place.\n\n**Technical details**\n- SuiteScript 2.0 RESTlets use the AMD module format and support modern (ES6+) JavaScript features, improving modularity, maintainability, and performance."},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the version type of the NetSuite Restlet being used, i.e., which version or variant of the Restlet API the integration will interact with. 
This choice drives request routing, response handling, and alignment with the expected API contract for the chosen Restlet version.\n\n**Field behavior**\n- Determines which API endpoints and methods are accessible.\n- May affect authentication mechanisms and data serialization formats.\n\n**Implementation guidance**\n- Use only predefined, officially supported Restlet version types, and validate the value against the current list of supported versions before deployment.\n- Update this property when upgrading to a newer Restlet version, and coordinate the change with client applications and integration workflows.\n- Monitor NetSuite release notes and documentation for changes or deprecations related to Restlet versions.\n\n**Examples**\n- \"1.0\" - the stable Restlet API version 1.0.\n- \"2.0\" - the newer Restlet API version 2.0 with enhanced features.\n\n**Important notes**\n- An incorrect or unsupported type value can cause API calls to fail or behave unpredictably.\n- The type must be consistent with the NetSuite environment configuration and deployment settings; changing it may require updates to client-side code, authentication flows, and data handling logic.\n\n**Technical details**\n- Typically represented as a string value."},"enum":{"type":"array","items":{"type":"object"},"description":"The list of predefined values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API. Restricting input to this set prevents invalid version assignments and supports validation, auto-completion, and dropdown selections in user interfaces and API clients.\n\n**Implementation guidance**\n- Populate the enum with the currently supported RESTlet API versions and update it as versions are released or deprecated.\n- Use a consistent string format matching the official versioning scheme (e.g., semantic versions like \"1.0\", \"2.0\" or date-based versions like \"2023.1\").\n- Reject any input not included in the enum with a validation error.\n\n**Important notes**\n- Changes to this enum should be communicated to all API consumers to manage version compatibility.\n- Validation logic, UI components, and downstream processes that rely on the `restletVersion` value all depend on this enum's accuracy.\n- Used by validation middleware and schema validators to enforce the allowed values."},"lowercase":{"type":"object","description":"Specifies whether the Restlet version string should be converted to lowercase to ensure consistent formatting.\n\n**Field behavior**\n- true: the version string is lowercased before further processing or output (e.g., \"V1.0\" becomes \"v1.0\").\n- false or omitted: the version string retains its original casing.\n- Affects only the textual representation of the version string, not its underlying value or meaning.\n\n**Important notes**\n- Changing the casing may impact case-sensitive external systems; confirm compatibility with all consumers of the version string before enabling.\n- Apply the transformation before any version comparisons, storage, or output to avoid discrepancies."}},"description":"Specifies the version of the NetSuite Restlet script to invoke for the API call, allowing precise control over which iteration of the Restlet logic is executed and facilitating smooth transitions between script updates.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of a deployed, active Restlet script in the NetSuite account.\n- Adopt a clear, consistent versioning scheme (e.g., semantic versioning, date-based, or custom tags) and validate the version string format.\n- Coordinate version updates with deployment and testing processes.\n\n**Examples**\n- \"1.0\"\n- \"2023.1\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version causes the API call to fail or produce unexpected results.\n- Works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet.\n- Proper versioning supports backward compatibility and controlled feature rollouts, especially where multiple Restlet versions coexist.\n\n**Technical details**\n- Typically represented as a string."},"criteria":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"Specifies the category or classification of the criteria applied within the NetSuite RESTlet API. It acts as a discriminator that tells the API which schema, fields, and operators apply when filtering or querying data.\n\n**Implementation guidance**\n- Validate the input against the allowed set of criteria types; values are case-sensitive and must follow NetSuite naming conventions.\n- Keep the list of valid types aligned with the NetSuite account configuration, customizations, and API version.\n- Return meaningful errors when unsupported or invalid types are supplied.\n\n**Examples**\n- \"customer\" - criteria related to customer records.\n- \"transaction\" - filters on transaction data such as sales orders or invoices.\n- \"item\" - criteria on inventory or product items.\n- \"customrecord_xyz\" - criteria for a custom record type identified by its script ID.\n\n**Important notes**\n- Incorrect or unsupported type values may lead to API errors, empty results, or unexpected behavior.\n- Some types may require additional permissions or roles to access the corresponding data.\n- This property is mandatory for criteria filtering to function correctly."},"join":{"type":"string","description":"Specifies the criteria used to join related records or tables in a NetSuite RESTlet query, enabling the 
retrieval of data based on relationships between different record types.\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval.\n- Determines how records from different tables are combined based on matching fields.\n- Supports nested joins to allow complex queries involving multiple related records.\n**Implementation guidance**\n- Use valid join names as defined in NetSuite’s schema or documentation for the specific record types.\n- Ensure the join criteria align with the intended relationship to avoid incorrect or empty query results.\n- Combine with appropriate filters on the joined records to refine query results.\n- Validate join paths to prevent errors during query execution.\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using \"item\" join to filter transactions based on item attributes.\n- Nested join example: joining from a sales order to its customer and then to the customer’s address.\n**Important notes**\n- Incorrect join names or paths can cause query failures or unexpected results.\n- Joins may impact query performance; use them judiciously.\n- Not all record types support all possible joins; consult NetSuite documentation.\n- Joins are case-sensitive and must match NetSuite’s API specifications exactly.\n**Dependency chain**\n- Depends on the base record type specified in the query.\n- Works in conjunction with filter criteria to refine results.\n- May depend on authentication and permissions to access related records.\n**Technical details**\n- Typically represented as a string or object indicating the join path.\n- Used within the criteria object of a RESTlet query payload.\n- Supports multiple levels of nesting for complex joins.\n- Must conform to NetSuite’s SuiteScript or RESTlet API join syntax and conventions."},"operator":{"type":"string","description":"operator: >\n  Specifies the comparison operator used to evaluate the criteria in a NetSuite RESTlet request.\n  This operator determines how the field value is compared against the specified criteria value(s) to filter or query records.\n  **Field behavior**\n  - Defines the type of comparison between a field and a value (e.g., equality, inequality, greater than).\n  - Influences the logic of the criteria evaluation in RESTlet queries.\n  - Supports various operators such as equals, not equals, greater than, less than, contains, etc.\n  **Implementation guidance**\n  - Use valid NetSuite-supported operators to ensure correct query behavior.\n  - Match the operator type with the data type of the field being compared (e.g., use numeric operators for numeric fields).\n  - Combine multiple criteria with appropriate logical operators if needed.\n  - Validate operator values to prevent errors in RESTlet execution.\n  **Examples**\n  - \"operator\": \"is\" (checks if the field value is equal to the specified value)\n  - \"operator\": \"isnot\" (checks if the field value is not equal to the specified value)\n  - \"operator\": \"greaterthan\" (checks if the field value is greater than the specified value)\n  - \"operator\": \"contains\" (checks if the field value contains the specified substring)\n  **Important notes**\n  - The operator must be compatible with the field type and the value provided.\n  - Incorrect operator usage can lead to unexpected query results or errors.\n  - Operators are case-sensitive and should match NetSuite's expected operator strings.\n  **Dependency 
chain**\n  - Depends on the field specified in the criteria to determine valid operators.\n  - Works in conjunction with the criteria value(s) to form a complete condition.\n  - May be combined with logical operators when multiple criteria are used.\n  **Technical details**\n  - Typically represented as a string value in the RESTlet criteria JSON object.\n  - Supported operators align with NetSuite's SuiteScript search operators.\n  - Must conform to the list of operators recognized by the NetSuite RESTlet API."},"searchValue":{"type":"object","description":"searchValue: The value used as the search criterion to filter results in the NetSuite RESTlet API. This value is matched against the specified search field to retrieve relevant records based on the search parameters provided.\n**Field behavior**\n- Acts as the primary input for filtering search results.\n- Supports various data types depending on the search field (e.g., string, number, date).\n- Used in conjunction with other search criteria to refine query results.\n- Can be a partial or full match depending on the search configuration.\n**Implementation guidance**\n- Ensure the value type matches the expected type of the search field.\n- Validate the input to prevent injection attacks or malformed queries.\n- Use appropriate encoding if the value contains special characters.\n- Combine with logical operators or additional criteria for complex searches.\n**Examples**\n- \"Acme Corporation\" for searching customer names.\n- 1001 for searching by internal record ID.\n- \"2024-01-01\" for searching records created on or after a specific date.\n- \"Pending\" for filtering records by status.\n**Important notes**\n- The effectiveness of the search depends on the accuracy and format of the searchValue.\n- Case sensitivity may vary based on the underlying NetSuite configuration.\n- Large or complex search values may impact performance.\n- Null or empty values may result in no filtering or return all records.\n**Dependency chain**\n- Depends on the searchField property to determine which field the searchValue applies to.\n- Works alongside searchOperator to define how the searchValue is compared.\n- Influences the results returned by the RESTlet endpoint.\n**Technical details**\n- Typically passed as a string in the API request payload.\n- May require serialization or formatting based on the API specification.\n- Integrated into the NetSuite search query logic on the server side.\n- Subject to NetSuite’s search limitations and indexing capabilities."},"searchValue2":{"type":"object","description":"searchValue2 is an optional property used to specify the second value in a search criterion within the NetSuite RESTlet API. 
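For illustration only, a hypothetical range criterion might look like `{ \"operator\": \"between\", \"searchValue\": \"2023-01-01\", \"searchValue2\": \"2023-12-31\" }`; the exact key set of a criterion depends on the export configuration. 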
It is typically used in conjunction with search operators that require two values, such as \"between\" or \"not between,\" to define a range or a pair of comparison values.\n\n**Field behavior**\n- Represents the second operand or value in a search condition.\n- Used primarily with operators that require two values (e.g., \"between\", \"not between\").\n- Optional field; may be omitted if the operator only requires a single value.\n- Works alongside searchValue (the first value) to form a complete search criterion.\n\n**Implementation guidance**\n- Ensure that searchValue2 is provided only when the selected operator requires two values.\n- Validate the data type of searchValue2 to match the expected type for the field being searched (e.g., date, number, string).\n- When using range-based operators, searchValue2 should represent the upper bound or second boundary of the range.\n- If the operator does not require a second value, omit this property to avoid errors.\n\n**Examples**\n- For a date range search: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- For a numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- For a \"not between\" operator: searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 without a compatible operator may result in an invalid search query.\n- The data type and format of searchValue2 must be consistent with searchValue and the field being queried.\n- This property is ignored if the operator only requires a single value.\n- Proper validation and error handling should be implemented when processing this field.\n\n**Dependency chain**\n- Dependent on the \"operator\" property within the same search criterion.\n- Works in conjunction with \"searchValue\" to define the search condition.\n- Part of the \"criteria\" array or object in the NetSuite RESTlet search request.\n\n**Technical details**\n- Data type varies depending on the field being searched (string, number, date, etc.).\n- Typically serialized as a JSON property in the RESTlet request payload.\n- Must conform to the expected format for the field and operator to avoid API errors.\n- Used internally by NetSuite to construct the appropriate"},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to define criteria for filtering or querying data within the NetSuite RESTlet API.\n  This formula allows users to specify complex conditions using NetSuite's formula syntax, enabling advanced and flexible data retrieval.\n  **Field behavior**\n  - Accepts a formula expression as a string that defines custom filtering logic.\n  - Used to create dynamic and complex criteria beyond standard field-value comparisons.\n  - Evaluated by the NetSuite backend to filter records according to the specified logic.\n  - Can incorporate NetSuite formula functions, operators, and field references.\n  **Implementation guidance**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string before submission to avoid runtime errors.\n  - Use this field when standard criteria fields are insufficient for the required filtering.\n  - Combine with other criteria fields as needed to build comprehensive queries.\n  **Examples**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n  - \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n  - \"NVL({amount}, 0) > 1000\"\n  **Important notes**\n  - Incorrect or invalid formulas 
may cause the API request to fail or return errors.\n  - The formula must be compatible with the context of the query and the fields available.\n  - Performance may be impacted if complex formulas are used extensively.\n  - Formula evaluation is subject to NetSuite's formula engine capabilities and limitations.\n  **Dependency chain**\n  - Depends on the availability of fields referenced within the formula.\n  - Works in conjunction with other criteria properties in the request.\n  - Requires understanding of NetSuite's formula syntax and functions.\n  **Technical details**\n  - Data type: string.\n  - Supports NetSuite formula syntax including SQL-like expressions and functions.\n  - Evaluated server-side during the processing of the RESTlet request.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"_id":{"type":"object","description":"Unique identifier for the record within the NetSuite system.\n**Field behavior**\n- Serves as the primary key to uniquely identify a specific record.\n- Immutable once the record is created.\n- Used to retrieve, update, or delete the corresponding record.\n**Implementation guidance**\n- Must be a valid NetSuite internal ID format, typically a string or numeric value.\n- Should be provided when querying or manipulating a specific record.\n- Avoid altering this value to maintain data integrity.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n**Important notes**\n- This ID is assigned by NetSuite and should not be generated manually.\n- Ensure the ID corresponds to an existing record to avoid errors.\n- When used in criteria, it filters the dataset to the exact record matching this ID.\n**Dependency chain**\n- Dependent on the existence of the record in NetSuite.\n- Often used in conjunction with other criteria fields for precise querying.\n**Technical details**\n- Typically represented as a string or integer data type.\n- Used in RESTlet scripts as part of the criteria object to specify the target record.\n- May be included in URL parameters or request bodies depending on the API design."}},"description":"criteria: >\n  Defines the set of conditions or filters used to specify which records should be retrieved or affected by the NetSuite RESTlet operation. 
This property enables clients to precisely narrow down the dataset by applying one or more criteria based on record fields, comparison operators, and values, supporting complex logical combinations to tailor the query results.\n\n  **Field behavior**\n  - Accepts a structured object or an array representing one or multiple filtering conditions.\n  - Supports logical operators such as AND, OR, and nested groupings to combine multiple criteria flexibly.\n  - Each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n  - Enables filtering on various data types including strings, numbers, dates, and booleans.\n  - Used to limit the scope of data returned or manipulated by the RESTlet to only those records that meet the specified conditions.\n  - When omitted or empty, the RESTlet may return all records or apply default filtering behavior as defined by the implementation.\n\n  **Implementation guidance**\n  - Validate the criteria structure rigorously to ensure it conforms to the expected schema before processing.\n  - Support nested criteria groups to allow complex and hierarchical filtering logic.\n  - Map criteria fields and operators accurately to corresponding NetSuite record fields and search operators, considering data types and operator compatibility.\n  - Handle empty or undefined criteria gracefully by returning all records or applying sensible default filters.\n  - Sanitize all input values to prevent injection attacks, malformed queries, or unexpected behavior.\n  - Provide clear error messages when criteria are invalid or unsupported.\n  - Optimize query performance by translating criteria into efficient NetSuite search queries.\n\n  **Examples**\n  - A single criterion filtering records where status equals \"Open\":\n    `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n  - Multiple criteria combined with AND logic:\n    `[{\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n  - Nested criteria combining OR and AND:\n    `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2} ] } ] }`"},"columns":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the data type of the column in the NetSuite RESTlet response. 
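As a purely illustrative sketch (key and type names are hypothetical and must match the deployed schema), a typed column definition might look like `{ \"label\": \"Amount\", \"type\": \"currency\" }`. 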
It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- \"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences how client applications parse, validate, and display the column values."},"join":{"type":"string","description":"join: Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
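For example, a joined column could be expressed as `{ \"label\": \"Customer Email\", \"join\": \"customer\", \"type\": \"string\" }`, where the label, join, and type values are illustrative only. 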
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to 
function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive but recommended to use uppercase for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"object","description":"enum: >\n  Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"object","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- 
May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"A brief textual summary providing an overview or key points related to the item or record represented by this property. This summary is intended to give users a quick understanding without needing to delve into detailed data.\n\n**Field behavior**\n- Contains concise, human-readable text summarizing the main aspects or highlights of the associated record.\n- Typically used in list views, reports, or previews where a quick reference is needed.\n- May be displayed in user interfaces to provide context or a snapshot of the data.\n- Can be optional or mandatory depending on the specific API implementation.\n\n**Implementation guidance**\n- Ensure the summary is clear, concise, and informative, avoiding overly technical jargon unless appropriate.\n- Limit the length to a reasonable number of characters to maintain readability in UI components.\n- Sanitize input to prevent injection of malicious content if the summary is user-generated.\n- Support localization if the API serves multi-lingual environments.\n- Update the summary whenever the underlying data it represents changes to keep it accurate.\n\n**Examples**\n- \"Quarterly sales report showing a 15% increase in revenue.\"\n- \"Customer profile summary including recent orders and contact info.\"\n- \"Inventory item overview highlighting stock levels and reorder status.\"\n- \"Project status summary indicating milestones achieved and pending tasks.\"\n\n**Important notes**\n- The summary should not contain sensitive or confidential information unless properly secured.\n- It is distinct from detailed descriptions or full data fields; it serves as a high-level overview.\n- Consistency in style and format across summaries improves user experience.\n- May be truncated in some UI contexts; design summaries to convey essential information upfront.\n\n**Dependency chain**\n- Often derived from or related to other detailed fields within the record.\n- May influence or be influenced by display components or reporting modules consuming this property.\n- Could be linked to user permissions determining visibility of summary content.\n\n**Technical details**\n- Data type is typically a string.\n- May have a maximum length constraint depending on API or database schema.\n- Stored and transmitted as plain text; consider encoding if special characters are used.\n- Accessible via the RESTlet API under the path `netsuite.restlet.columns.summary`."},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. 
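As a hedged illustration, a formula column might be defined as `{ \"label\": \"Is Open\", \"formula\": \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" }`, where the label and the field reference are hypothetical. 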
This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"label":{"type":"string","description":"label: The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization resources if internationalization is supported.\n  **Technical 
details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"sort: A boolean flag indicating whether the query results are ordered by this column's values.\n  **Field behavior**\n  - Marks this column as a sort column for the result set.\n  - When false or omitted, the column does not participate in result ordering.\n  - Influences how the dataset is organized before being returned by the API.\n  **Implementation guidance**\n  - Set to true only on the column (or columns) whose values should drive the result order.\n  - Ensure the flag is applied to a valid column in the dataset.\n  - Validate that the value is a boolean to prevent errors or unexpected behavior.\n  **Examples**\n  - `true` to order the results by this column's values.\n  - `false` (or omitted) to leave this column out of result ordering.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no column is flagged for sorting, the default ordering behavior of the API applies.\n  - The sort direction and collation are determined by the underlying NetSuite search definition.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Implemented as a boolean value in the API request.\n  - May be part of a query string or request body depending on the API design.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"columns: Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. 
By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - `[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically causes all available columns for the target record type to be returned, which can increase payload size and affect performance."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"number","description":"type: Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. 
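For instance, a schema fragment such as `{ \"markExportedBatchSize\": { \"type\": \"integer\" } }` (illustrative notation only) declares the batch size as a whole number. 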
It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"object","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
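For illustration only, a configuration sketch such as `{ \"markExportedBatchSize\": { \"max\": 500, \"cLocked\": true } }` would pin the batch-size settings against further edits. 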
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"object","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. 
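For example (values are illustrative), a range of `{ \"min\": 10, \"max\": 500 }` ensures that no batch smaller than 10 records is dispatched. 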
It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer\n- Must be a positive integer greater than zero\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size"},"max":{"type":"object","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for 
environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"markExportedBatchSize: The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing improved API latency can raise the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply 
to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters"},"TODO":{"type":"object","description":"TODO: A placeholder property used to indicate tasks, features, or sections within the NetSuite RESTlet integration that require implementation, completion, or further development. This property functions as a clear marker for developers and project managers to identify areas that are pending work, ensuring that these tasks are tracked and addressed before finalizing the API. It is not intended to hold any functional data or be part of the production API contract until fully implemented.\n\n**Field behavior**\n- Serves as a temporary indicator for incomplete, pending, or planned tasks within the API schema.\n- Does not contain operational data or affect API functionality until properly defined and implemented.\n- Helps track development progress and highlight areas needing attention during the development lifecycle.\n- Should be removed or replaced with finalized implementations once the associated task is completed.\n- May be used to generate reports or dashboards reflecting outstanding development work.\n\n**Implementation guidance**\n- Utilize the TODO property to explicitly flag API sections requiring further coding, configuration, or review.\n- Accompany TODO entries with detailed comments or references to issue tracking systems (e.g., JIRA, GitHub Issues) for clarity and traceability.\n- Establish regular review cycles to update, resolve, or remove TODO properties to maintain an accurate representation of development status.\n- Avoid deploying TODO properties in production environments to prevent confusion, incomplete features, or potential runtime errors.\n- Integrate TODO tracking with project management workflows to ensure timely resolution.\n\n**Examples**\n- TODO: Implement OAuth 2.0 authentication mechanism for RESTlet endpoints.\n- TODO: Add comprehensive validation rules for input parameters to ensure data integrity.\n- TODO: Complete error handling and logging for data retrieval failures.\n- TODO: Optimize response payload size for improved performance.\n- TODO: Integrate unit tests covering all new RESTlet functionalities.\n\n**Important notes**\n- The presence of TODO properties signifies incomplete or provisional functionality and should not be interpreted as finalized API features.\n- Unresolved TODO items can lead to partial implementations, unexpected behavior, or runtime errors if not addressed before release.\n- Effective management and timely resolution of TODO properties are critical for maintaining code quality, project timelines, and overall system stability.\n- TODO properties should be clearly documented and communicated within the development team to avoid oversight.\n\n**Dependency chain**\n- TODO properties may depend on other modules, services, or API components that are under development or pending integration.\n- Often linked to external project management or issue tracking tools for assignment, prioritization, and progress monitoring.\n- 
"},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during execution of the NetSuite RESTlet hook. This parameter controls the workload size for each batch operation, balancing processing efficiency against system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle, controlling how data is segmented during RESTlet execution.\n- Directly influences the frequency and duration of batch processing operations, memory consumption, overall throughput, and latency.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts; larger batch sizes improve throughput by reducing the number of batch cycles but increase per-batch processing time and the risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero.\n- Take NetSuite API governance limits, such as usage units and execution time, into account, and monitor system performance to adjust the value over time.\n- Keep batchSize aligned with other batch-related configurations to maintain consistent, predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion; very low values add overhead through more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- Changes to batchSize may require corresponding adjustments to error handling and retry logic.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by other batch-related settings, such as governance limits and retry configuration."},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations, and implement error handling for cases where it does not correspond to a valid or accessible file.\n- Use this ID to fetch file metadata or content, or to perform updates, within the preSend hook.\n- Ensure that the executing user or integration has the necessary permissions to access the referenced file.\n- Avoid hardcoding this ID; retrieve it dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID, but deleting and recreating a file produces a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet and on appropriate permissions to access or manipulate it.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation and stored internally as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Integral to file-level operations performed by automated workflows, integrations, and RESTlet hooks."},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. 
It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context; modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow, and optimize it for performance to minimize request latency.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data; ensure the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - Test the function thoroughly to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named function in the deployed integration code."},"configuration":{"type":"object","description":"configuration: >\n  An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests, such as authentication parameters (e.g., tokens, API keys), request modification flags, logging options, timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during execution of the preSend hook to adapt request handling to the current context.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object, and validate all values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all configuration options, their expected data types, default values, and effects on the hook’s operation; consider versioning the configuration schema if multiple versions of the hook or integration exist.\n  - Handle sensitive values (e.g., authentication tokens) securely, following best practices for encryption and access control.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" }`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`\n  **Important notes**\n  - Invalid or inconsistent configuration values can cause the preSend hook to fail or behave unpredictably; validate and test configuration changes before deploying them to production."}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. 
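\n\nFor instance (an illustrative response fragment only; the field names are hypothetical), a hook might stamp metadata onto the outgoing payload:\n\n```json\n{\"data\": [], \"meta\": {\"requestId\": \"abc-123\", \"durationMs\": 42}}\n```\n\n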
This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client; any changes made here directly impact the final output the client receives.\n- Receives the response data as input, may modify it, and should return the final response object to be sent.\n- Can be used to log, audit, or transform the response payload.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure any modifications maintain the expected response format and data integrity, and validate the modified response against the API schema and client expectations.\n- Avoid long-running or blocking operations to prevent delaying response delivery.\n- Handle errors gracefully within the hook to avoid disrupting the overall RESTlet response flow.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for auditing or debugging purposes.\n- Transforming the response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent; errors thrown here may cause the RESTlet to fail or return an error response.\n- Use it to enforce response-level policies, compliance, or data governance rules, and avoid side effects that could alter the idempotency or consistency of the response.\n- Test this hook thoroughly to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared, and precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or middleware that assemble and enrich the response object."}},"description":"hooks: >\n  An array of hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
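\n\n  As a purely illustrative sketch (the hook property names follow this schema, while the function name and file ID are hypothetical):\n\n  ```json\n  {\"hooks\": {\"batchSize\": 100, \"preSend\": {\"function\": \"sanitizePayload\", \"fileInternalId\": \"12345\"}}}\n  ```\n\n  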
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"object","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The 
record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"restlet: The identifier or URL of the NetSuite RESTlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which RESTlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the target RESTlet script or endpoint for API calls within the NetSuite environment, routing requests to custom server-side scripts developed with NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations, acting as the primary entry point for custom logic that extends standard NetSuite functionality.\n- Supports HTTP methods such as GET, POST, PUT, and DELETE depending on the RESTlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n\n**Implementation guidance**\n- Confirm that the RESTlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs; use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or the full URL for external integrations or REST clients.\n- Verify that the RESTlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, etc.).\n- Secure the RESTlet endpoint with authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic for scenarios where the RESTlet is unavailable or returns errors, and test thoroughly in a non-production environment before deploying to production.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a RESTlet that processes sales orders)"},"distributed":{"type":"object","properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", 
\"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"disabled: Indicates whether the distributed feature in NetSuite is disabled or not. 
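\n\nFor example, placed under the `netsuite.distributed` namespace noted in the technical details below (the value shown is arbitrary):\n\n```json\n{\"netsuite\": {\"distributed\": {\"disabled\": false}}}\n```\n\n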
This boolean flag controls the availability and operational status of the distributed functionalities within the NetSuite integration, allowing administrators or systems to enable or disable these features as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is fully disabled, preventing any distributed operations or workflows from executing.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n- Acts as a toggle switch to control the accessibility of distributed capabilities within the NetSuite environment.\n- Changes to this flag directly influence the behavior of distributed-related processes and integrations.\n\n**Implementation guidance**\n- Use a boolean value: `true` to disable the distributed feature, `false` to enable it.\n- Before disabling, verify that no critical processes depend on distributed functionality to avoid disruptions.\n- Implement validation checks to confirm the current state before initiating distributed operations.\n- Provide clear user notifications or system logs when the feature is disabled to aid in troubleshooting and auditing.\n- Consider the impact on dependent modules and ensure coordinated updates if disabling this feature.\n\n**Examples**\n- `disabled: true`  # The distributed feature is turned off, disabling all related operations.\n- `disabled: false` # The distributed feature is active and available for use.\n- Omitted `disabled` property defaults to `false`, enabling the feature by default.\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, potentially causing failures or delays.\n- Some systems may require a restart or reinitialization after changing this setting for the change to take full effect.\n- Modifying this property should be restricted to users with appropriate permissions to prevent unauthorized disruptions.\n- Always assess the broader impact on the NetSuite integration before toggling this flag.\n\n**Dependency chain**\n- Directly affects modules and properties that rely on distributed functionality within the NetSuite integration.\n- Should be checked and respected by any API calls, workflows, or processes that involve distributed features.\n- May influence error handling and fallback mechanisms in distributed-related operations.\n\n**Technical details**\n- Data type: Boolean\n- Default value: `false` (distributed feature enabled)\n- Located under the `netsuite.distributed` namespace in the API schema\n- Changing this property triggers state changes in distributed feature availability within the system"},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. 
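\n\nAn illustrative fragment combining the two trigger filters (the `netsuite.distributed` nesting follows the namespace noted elsewhere in this schema; the values are arbitrary):\n\n```json\n{\"netsuite\": {\"distributed\": {\"executionContext\": [\"userinterface\"], \"executionType\": [\"create\", \"edit\"]}}}\n```\n\n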
When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", \"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"object","description":"qualifier: A string value used to specify a particular qualifier or modifier that further defines, categorizes, or scopes the associated data within the NetSuite distributed context. This property enables more granular identification, filtering, and processing of data by applying specific criteria or attributes relevant to business logic, integration workflows, or operational requirements. It serves as an optional but powerful tool to distinguish data subsets, enhance data semantics, and support conditional handling in distributed NetSuite environments.\n\n**Field behavior**\n- Acts as an additional identifier or modifier to refine the meaning, scope, or classification of the associated data.\n- Enables filtering, categorization, or qualification of data entries in distributed NetSuite operations based on specific business rules.\n- Typically optional but may be mandatory in certain contexts or API endpoints where precise data segmentation is required.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n- Supports multiple use cases including regional segmentation, priority tagging, type classification, and channel identification.\n\n**Implementation guidance**\n- Ensure the qualifier value strictly aligns with the accepted set of qualifiers defined in the business domain, integration specifications, or system configuration.\n- Implement validation mechanisms to verify that the qualifier string matches allowed or expected values to avoid errors, misclassification, or unintended behavior.\n- Adopt consistent naming conventions and formatting standards (e.g., lowercase, hyphen-separated) for qualifiers to maintain clarity, readability, and interoperability across systems.\n- Maintain comprehensive documentation of all custom and standard qualifiers used, including their intended meaning and usage scenarios, to facilitate maintenance, troubleshooting, and future integrations.\n- Consider the impact of qualifiers on downstream processing, reporting, and analytics to ensure they are leveraged effectively and do not introduce ambiguity.\n\n**Examples**\n- \"region-us\" to specify data related to the United 
States region.\n- \"priority-high\" to indicate transactions or records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n- \"segment-enterprise\" to classify data pertaining to enterprise-level customers.\n- \"status-active\" to filter or identify active records within a dataset.\n\n**Important notes**\n- The qualifier should be meaningful, contextually relevant, and aligned with the business logic to ensure accurate data interpretation.\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering, routing, or validation settings in the distributed configuration."},"skipExportFieldId":{"type":"string","description":"skipExportFieldId is an identifier for a specific field within the NetSuite distributed configuration that determines whether certain data should be excluded from export processes. It serves as a control mechanism to selectively omit data associated with particular fields during export operations, enabling tailored and efficient data handling.\n\n**Field behavior**\n- Acts as a flag or marker to skip exporting data associated with the specified field ID; export routines omit the data linked to this field from the export payload.\n- Controls export behavior on a per-field basis within distributed NetSuite configurations, without affecting data visibility or storage within NetSuite.\n- Useful where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance**\n- Ensure the field ID corresponds to a valid, existing field within the NetSuite schema, and validate its format and existence before applying it, to prevent export errors.\n- Use this property to optimize export operations by excluding unnecessary or sensitive data fields, improving performance and compliance.\n- Integrate with export logic so this property is checked before fields are included in the export output, ensuring consistent behavior.\n- Consider maintaining a centralized list of skipExportFieldIds for easier management and auditing.\n\n**Examples**\n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)\n- skipExportFieldId: \"item_custom_field_123\" (excludes a specific item custom field from export)\n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)\n\n**Important notes**\n- This property only affects export operations; it does not alter data storage or visibility within NetSuite.\n- Misconfiguration may lead to incomplete exports if critical fields are skipped unintentionally, impacting downstream processes; use judiciously and document changes to avoid unintended data omissions.\n\n**Dependency chain**\n- Depends on the existence and validity of the specified field ID within the NetSuite schema.\n- Relies on export routines to check and respect this property during data export processes.
\n- May interact with other export configuration settings that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is essential for accurately referencing and manipulating a specific file during operations such as retrieval, update, or deletion within NetSuite's environment. It acts as a primary key that ensures precise targeting of files in automated workflows, scripts, and API calls.\n\n**Field behavior**\n- Serves as a unique and immutable key identifying a file in the NetSuite file cabinet.\n- Utilized in pre-send hooks and other automation points to specify the exact file being processed or referenced.\n- Must correspond to an existing file’s internal ID within the NetSuite account to ensure valid operations.\n\n**Implementation guidance**\n- Always ensure the value is a valid integer corresponding to an existing file’s internal ID in NetSuite, and validate it before performing any file operations to avoid runtime errors or failed transactions.\n- Use this ID when invoking NetSuite APIs, SuiteScript, or other integration points to fetch, update, or delete files.\n- Avoid hardcoding the internal ID; retrieve it dynamically through queries or API calls to maintain adaptability and reduce maintenance overhead.\n- Handle exceptions gracefully when the ID does not correspond to any file, providing meaningful error messages or fallback logic.\n\n**Examples**\n- 12345\n- 987654\n- 1001\n\n**Important notes**\n- The internal ID is system-generated by NetSuite and guaranteed to be unique within the account; it is distinct from file names, external URLs, and folder identifiers.\n- Using an incorrect or non-existent internal ID will cause operations to fail, potentially interrupting workflows.\n- The internal ID remains constant for the lifetime of the file, even if the file is moved or renamed.\n\n**Dependency chain**\n- Depends on the file existing in the NetSuite file cabinet prior to referencing.\n- Often used alongside related properties such as file name, folder ID, file type, or metadata to provide context or additional filtering.\n- May be required input for downstream processes that manipulate or validate file contents.\n\n**Technical details**\n- Represented as an integer value assigned by NetSuite upon file creation; immutable once assigned and never reassigned to a different file.\n- Used internally by NetSuite APIs, SuiteScript, and integration middleware to address specific file records."},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be executed as a pre-send hook within the NetSuite distributed system. 
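\n\n  A minimal sketch of how the hook might be wired up (property names follow this schema; the function name and file ID are hypothetical):\n\n  ```json\n  {\"distributed\": {\"hooks\": {\"preSend\": {\"function\": \"validateCustomerData\", \"fileInternalId\": \"12345\"}}}}\n  ```\n\n  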
This function is invoked immediately before sending data or requests, allowing for custom processing, validation, or modification of the payload to ensure data integrity and compliance with business rules.\n\n  **Field behavior**\n  - Defines the exact function to be called prior to sending data or requests, enabling interception, inspection, and manipulation of data before transmission.\n  - Supports integration of custom business logic, validation, enrichment, or logging steps.\n  - Must reference a valid, accessible function within the current execution context or environment.\n  - The function’s outcome can determine whether the sending process proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the function name corresponds exactly to a defined function in the codebase, script environment, or registered hooks.\n  - The function should accept the expected input parameters (such as the payload or context) and return appropriate results or modifications.\n  - Implement robust error handling to prevent unhandled exceptions from disrupting the sending workflow, and verify the function completes promptly to avoid introducing latency or blocking the sending process.\n  - If asynchronous operations are necessary, ensure they are properly awaited or handled so they complete before sending.\n  - Document the function’s purpose, input/output contract, and side effects, and follow naming conventions consistent with the overall codebase or organizational standards.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"customAuthorizationCheck\"\n  - \"enrichOrderDetails\"\n  - \"checkInventoryAvailability\"\n\n  **Important notes**\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the sending process depending on the implementation.\n  - Avoid long-running or blocking operations, and avoid irreversible side effects unless explicitly intended, since the function runs prior to data transmission.\n  - Ensure the function does not introduce security vulnerabilities, such as exposing sensitive data or allowing injection attacks, and report errors clearly and consistently to aid troubleshooting and operational monitoring.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled in the distributed configuration and depends on the named function being present in the deployed integration code."},"configuration":{"type":"object","description":"configuration: >\n  Configuration settings for the preSend hook in the NetSuite distributed system.\n  This property defines the parameters and options that control the behavior and execution of the preSend hook, allowing customization of how data is processed before being sent.\n  It enables fine-tuning of operational aspects such as retries, timeouts, validation rules, logging, and payload constraints to ensure reliable and efficient data transmission.\n  **Field behavior**\n  - Specifies the customizable settings that dictate how the preSend hook operates, controlling data manipulation, validation, and preparation steps prior to sending.\n  - Can include flags, thresholds, retry policies, timeout durations, logging options, and other operational parameters.\n  - May be optional or mandatory depending on the 
specific implementation and requirements of the preSend hook.\n  - Supports nested configuration objects to allow detailed and structured settings.\n  **Implementation guidance**\n  - Define clear, well-documented configuration options that directly impact the preSend process, and validate all values rigorously against expected data types, ranges, and formats.\n  - Provide sensible default values for optional parameters to enhance usability and reduce configuration errors, and preserve backward compatibility when extending or modifying options.\n  - Document each configuration parameter’s purpose, accepted values, and effect on hook behavior.\n  - Consider security implications when allowing configuration of headers or other sensitive parameters.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"customHeaders\": { \"X-Custom-Header\": \"value\" } }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  - `{ \"payloadCompression\": \"gzip\", \"timeout\": 10000 }`\n  **Important notes**\n  - Incorrect or invalid configuration values can cause the preSend hook to fail or behave unpredictably; test changes thoroughly in development or staging environments before deploying to production.\n  - Some configuration changes may require restarting or reinitializing the hook or related services to take effect.\n  - Sensitive parameters should be handled securely to prevent exposure of confidential information, and configuration should be version-controlled and documented to facilitate maintenance."}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
It allows for custom processing, modification, or validation of the request payload and headers, enabling dynamic adjustments or logging prior to transmission.\n\n**Field behavior**\n- Executed synchronously or asynchronously just before the API request is dispatched to the NetSuite endpoint.\n- Receives the full request object, including headers, body, query parameters, and other relevant metadata, and permits modification of any part of it.\n- Supports validation logic to ensure the request meets required criteria; throwing an error aborts the request.\n- Enables injection of dynamic data such as authentication tokens, custom headers, or correlation IDs, and can log or audit outgoing request details for debugging or monitoring.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object, and ensure any asynchronous operations are properly awaited to maintain request integrity.\n- Keep processing lightweight to avoid introducing latency or blocking the request pipeline, and handle exceptions carefully; unhandled errors will prevent the request from being sent.\n- Centralize request customization logic here to improve maintainability and reduce duplication, avoid side effects that could affect other parts of the system, and validate inputs thoroughly to prevent malformed requests.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Modifying request parameters based on user roles or feature flags at runtime.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests across distributed systems.\n\n**Important notes**\n- This hook executes on every outgoing request, so its performance impact should be minimized; avoid network calls or heavy computations inside it.\n- Any modifications made within preSend directly affect the final request sent to NetSuite, and errors thrown inside the hook abort the request and propagate upstream.\n- This hook is strictly for pre-request processing and should not be used for handling responses; ensure thread safety if it accesses shared resources or global state.\n\n**Dependency chain**\n- Invoked after request construction but before the request is dispatched.\n- Precedes any network transmission or retry handling."}},"description":"hooks: >\n  A collection of user-defined functions or callbacks that are executed at specific points during the lifecycle of the distributed process within the NetSuite integration. 
These hooks enable customization and extension of the default behavior by injecting custom logic before, during, or after key operations, allowing for flexible adaptation to unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Contains one or more functions or callback references mapped to specific lifecycle events.\n  - Each hook corresponds to a distinct event or stage in the distributed process, such as pre-processing, post-processing, error handling, or data transformation.\n  - Hooks are invoked automatically by the system at predefined points in the workflow.\n  - Can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors.\n  - Supports both synchronous and asynchronous execution models depending on the hook’s purpose and implementation.\n  - Execution order of hooks for the same event is deterministic and should be documented.\n  - Hooks should be designed to avoid side effects that could impact other parts of the process.\n\n  **Implementation guidance**\n  - Define hooks as named functions or references to executable code blocks compatible with the integration environment.\n  - Ensure hooks are idempotent to prevent unintended consequences from repeated or retried executions.\n  - Validate all inputs and outputs rigorously within hooks to maintain data integrity and system stability.\n  - Use hooks to integrate with external systems, perform custom validations, enrich data, or implement business-specific logic.\n  - Document each hook’s purpose, expected inputs, outputs, and any side effects clearly for maintainability.\n  - Implement robust error handling within hooks to gracefully manage exceptions without disrupting the main process flow.\n  - Test hooks thoroughly in isolated and integrated environments to ensure reliability and performance.\n  - Consider security implications, such as data exposure or injection risks, when implementing hooks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs detailed transaction metadata after a successful operation for auditing purposes.\n  - A hook that modifies or enriches payload data during transformation stages to align with NetSuite’s schema.\n  - A hook that triggers email or system notifications upon error occurrences to alert support teams.\n  - A hook that retries failed operations with exponential backoff to improve resilience.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: >\n  A collection of related sublist objects associated with the main record, representing grouped sets of data entries that provide additional details or linked information within the NetSuite distributed record context. 
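\n\n  The schema leaves the exact sublist shape open; purely as a hypothetical illustration, a sales order export might reference an item sublist like this:\n\n  ```json\n  {\"sublists\": {\"item\": [\"item\", \"quantity\", \"rate\", \"amount\"]}}\n  ```\n\n  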
These sublists enable the organization of complex, hierarchical data by encapsulating related records or line items that belong to the primary record, facilitating detailed data management and interaction within the system.\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record.\n  - Organizes and structures complex record information into manageable, logically grouped sections.\n  - Supports nested and hierarchical data representation, allowing for detailed and granular record composition.\n  - Typically handled as arrays or lists of sublist objects, supporting iteration and manipulation.\n  - Reflects one-to-many relationships inherent in NetSuite records, such as line items or related entities.\n  **Implementation guidance**\n  - Ensure each sublist object strictly adheres to the defined schema and data types for its specific sublist type.\n  - Validate the consistency and referential integrity of sublist data in relation to the main record to prevent data anomalies.\n  - Design for dynamic handling of sublists, accommodating varying sizes including empty or large collections.\n  - Implement robust CRUD (Create, Read, Update, Delete) operations for sublist entries to maintain accurate and up-to-date data.\n  - Consider transactional integrity when modifying sublists to ensure changes are atomic and consistent.\n  **Examples**\n  - A sales order record containing sublists for item lines detailing products, quantities, and prices; shipping addresses specifying delivery locations; and payment schedules outlining installment plans.\n  - An employee record with sublists for dependents listing family members; employment history capturing previous roles and durations; and certifications documenting professional qualifications.\n  - A customer record including sublists for contacts with communication details; transactions recording purchase history; and communication logs tracking interactions and notes.\n  **Important notes**\n  - Sublists are critical for accurately modeling one-to-many relationships within NetSuite records, enabling detailed data capture and reporting.\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing.\n  - Maintaining synchronization between sublists and the main record is essential to preserve data integrity and prevent inconsistencies.\n  - Performance considerations should be taken into account when handling large sublists to optimize system responsiveness.\n  **Dependency chain**\n  - Depends on the main record schema"},"referencedFields":{"type":"object","description":"referencedFields: >\n  A list of field identifiers that are referenced within the current context, typically used to denote dependencies or relationships between fields in a NetSuite distributed environment. 
This property helps in mapping out how different fields interact or rely on each other, facilitating data integrity, validation, and synchronization across distributed components or services.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on or interacts with.\n  - Used to establish explicit relationships or dependencies between multiple fields.\n  - Enables tracking of data flow and ensures consistency across distributed systems.\n  - Supports dynamic resolution of dependencies during runtime or configuration.\n  **Implementation guidance**\n  - Populate with valid and existing field identifiers as defined in the NetSuite schema or metadata.\n  - Verify that all referenced fields are accessible and correctly scoped within the current context.\n  - Use this property to manage dependencies critical for data validation, synchronization, or processing logic.\n  - Keep the list updated to reflect any schema changes to avoid broken references or inconsistencies.\n  - Avoid circular references by carefully managing dependencies between fields.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  - [\"productCode\", \"inventoryLevel\", \"reorderThreshold\"]\n  **Important notes**\n  - Referenced fields must be unique within the list to prevent redundancy and confusion.\n  - Modifications to referenced fields can impact dependent processes; changes should be tested thoroughly.\n  - This property contains only the identifiers (names or keys) of fields, not their actual data or values.\n  - Proper documentation of referenced fields improves maintainability and clarity of dependencies.\n  **Dependency chain**\n  - Often linked with fields that require validation or data aggregation from other fields.\n  - May influence or be influenced by business rules, workflows, or automation scripts that depend on multiple fields.\n  - Changes in referenced fields can cascade to affect dependent fields or processes.\n  **Technical details**\n  - Data type: Array of strings.\n  - Each string represents a unique field identifier within the NetSuite distributed environment.\n  - The array should be serialized in a format compatible with the consuming system (e.g., JSON array).\n  - Maximum length and allowed characters for field identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects that represent associated records or entities linked to the primary record within the NetSuite distributed data model. These related lists provide contextual information and enable navigation to connected data, facilitating comprehensive data retrieval and management. 
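\n\nPurely as a hypothetical rendering (this schema does not define the object’s internal shape):\n\n```json\n{\"relatedLists\": {\"transactions\": [\"salesorder\"], \"contacts\": [\"contact\"]}}\n```\n\n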
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, thereby supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries, each representing a distinct association to the primary record, enabling retrieval of linked records such as transactions, custom records, or subsidiary data.\n- Supports hierarchical or relational data structures by referencing related entities, allowing nested or multi-level associations.\n- Typically read-only in the context of distributed data retrieval, but may support updates or synchronization depending on API capabilities and permissions.\n- May include metadata such as record counts, last-updated timestamps, or status indicators for each related list, with dynamic inclusion or exclusion based on user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Populate with related list objects that are directly associated with the primary record, and ensure each entry includes unique identifiers, descriptive metadata, and the navigation links or references needed for data access and traversal.\n- Maintain consistent naming conventions, data structures, and field formats aligned with NetSuite’s standard data model and API specifications.\n- Implement pagination, filtering, or sorting to handle large sets of related records efficiently, and consider caching strategies or incremental updates for frequently accessed lists.\n- Validate all references and links to prevent broken or stale connections, and enforce access control so users only see related lists they are authorized to access.\n\n**Examples**\n- A customer record’s relatedLists might include \"Transactions\" (e.g., sales orders, invoices), \"Contacts\" (associated individuals), and \"Cases\" (customer support tickets).\n- An invoice record’s relatedLists could contain \"Payments\" (payment records), \"Shipments\" (delivery details), and \"Adjustments\" (billing corrections).\n- A custom record type might have relatedLists such as \"Attachments\" (files linked to the record) or \"Notes\" (user comments or annotations).\n- A vendor record’s relatedLists may include \"Purchase Orders,\" \"Bills,\" and \"Vendor Contacts.\""},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload the data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. 
This flag is critical in scenarios where data accuracy and freshness are paramount, such as after configuration changes or data updates that must be immediately reflected.\n\n**Field behavior**\n- When set to true, the system bypasses all caches and reloads data or configuration directly from the primary source, ensuring the latest state is retrieved; the reload typically involves invalidating caches and refreshing dependent components or services.\n- When set to false or omitted, the system may use cached or previously stored data to optimize performance and reduce load times.\n- Primarily used where stale data could lead to errors, inconsistencies, or outdated processing results.\n\n**Implementation guidance**\n- Use this flag judiciously to balance data freshness against system performance, avoiding unnecessary reloads that could degrade responsiveness.\n- Ensure that enabling forceReload initiates a comprehensive refresh cycle, including clearing relevant caches and reinitializing configuration or data layers, with robust error handling so a failed reload does not cause downtime or inconsistent state.\n- Monitor resource utilization and response times when forceReload is active, and document the scenarios and triggers for its use.\n\n**Examples**\n- forceReload: true (bypasses caches and reloads data/configuration from the authoritative source immediately)\n- forceReload: false (serves data from cache if available, improving response time)\n- forceReload omitted (defaults to false behavior, relying on cached data unless otherwise specified)\n\n**Important notes**\n- Excessive or unnecessary use of forceReload can lead to increased latency, higher resource consumption, and service degradation.\n- The flag does not validate the correctness or integrity of the source data; it only ensures the latest available data is fetched.\n- Downstream systems should be designed to tolerate the delays or transient states caused by forced reloads, and the flag should be coordinated with cache invalidation policies and data synchronization mechanisms to maintain consistency.\n\n**Dependency chain**\n- Relies on underlying cache management and invalidation frameworks to effectively bypass stored data.\n- Interacts with data retrieval modules, configuration loaders, and possibly distributed synchronization services.\n- May trigger logging, monitoring, or alerting workflows when a forced reload occurs."},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across different operational environments. 
This property determines the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees within the distributed architecture.\n\n**Field behavior**\n- Determines the operational context for all input/output processes within the distributed NetSuite system.\n- Influences how data is read from and written to various storage systems, message queues, or communication channels.\n- Affects performance tuning, security measures, and data consistency mechanisms based on the selected environment.\n- Typically set during system initialization or configuration phases and remains stable during runtime to ensure predictable behavior.\n- May trigger environment-specific logging, monitoring, and error-handling strategies.\n\n**Implementation guidance**\n- Validate the ioEnvironment value against a predefined set of supported environments such as \"development,\" \"staging,\" \"production,\" and any custom configurations.\n- Ensure that the selected environment is compatible with other system settings related to data handling, network communication, and security policies.\n- Implement robust error handling and fallback mechanisms to manage unsupported or invalid environment values gracefully.\n- Clearly document the operational implications, limitations, and recommended use cases for each environment option to guide system administrators and developers.\n- Coordinate environment settings across all distributed nodes to maintain consistency and prevent configuration drift.\n\n**Examples**\n- \"development\" — used for local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production settings for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined environment configurations tailored for specialized I/O requirements or experimental setups.\n\n**Important notes**\n- Changing the ioEnvironment typically requires restarting services or reinitializing connections to apply new configurations.\n- The environment setting directly impacts data integrity, access controls, and compliance with security policies.\n- Sensitive data must be handled according to the security standards appropriate for the selected environment.\n- Consistency across all distributed nodes is critical; all nodes should be configured with compatible ioEnvironment values to avoid data inconsistencies or communication failures.\n- Misconfiguration can lead to degraded performance, security vulnerabilities, or data loss.\n\n**Dependency chain**\n- Depends on system initialization and configuration management components.\n- Interacts with data storage modules, network communication layers, and security frameworks.\n- Influences logging, monitoring, and error handling."},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment.
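A minimal illustrative fragment pairing ioDomain with ioEnvironment (the flat nesting shown is an assumption, and the values are drawn from the example lists in these descriptions):\n```json\n{\n  \"ioEnvironment\": \"production\",\n  \"ioDomain\": \"api.netsuite.com\"\n}\n```\n\n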
This domain is critical for routing data requests and responses between distributed components and services, ensuring seamless communication and integration across the system.\n\n**Field behavior**\n- Defines the domain name utilized for network communication in distributed NetSuite environments.\n- Serves as the base domain for constructing URLs for API calls, data synchronization, and service endpoints.\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards.\n- Typically remains consistent within a deployment environment but can differ across environments such as development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify that the domain is properly configured in DNS and is resolvable by all distributed components.\n- Validate the domain format against standard domain naming conventions (e.g., RFC 1035).\n- Ensure the domain supports secure communication protocols (e.g., HTTPS with valid SSL/TLS certificates).\n- Coordinate updates to ioDomain with network, security, and operations teams to maintain service continuity.\n- When migrating or scaling services, update ioDomain accordingly and propagate changes to all dependent components.\n- Monitor domain accessibility and performance to detect and resolve connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n- \"dev-networks.internal.company.com\"\n\n**Important notes**\n- Incorrect or misconfigured ioDomain values can cause failed network requests, service interruptions, and data synchronization errors.\n- The domain must support necessary security certificates to enable encrypted communication and protect data in transit.\n- Changes to ioDomain may necessitate updates to firewall rules, proxy configurations, and network security policies.\n- Consistency in ioDomain usage across distributed components is essential to avoid routing conflicts and authentication issues.\n- Consider the impact on caching, CDN configurations, and DNS propagation delays when changing ioDomain.\n\n**Dependency chain**\n- Dependent on underlying network infrastructure, DNS setup, and domain registration.\n- Utilized by distributed service components for constructing communication endpoints.\n- May affect authentication and authorization workflows that rely on domain validation or origin verification.\n- Interacts with security components such as SSL/TLS certificate management and firewall configurations.\n- Influences monitoring, logging, and troubleshooting processes related to network communication."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the precise date and time when the data was last successfully synchronized between the system and NetSuite. 
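For illustration, a record carrying this field might look like the following sketch (the value is taken from the examples below; surrounding fields are omitted):\n```json\n{\n  \"lastSyncedDate\": \"2024-06-15T14:30:00Z\"\n}\n```\n\n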
This timestamp is essential for monitoring the freshness, consistency, and integrity of synchronized data, enabling systems to determine whether updates or incremental syncs are necessary.\n\n**Field behavior**\n- Captures the exact date and time of the most recent successful synchronization event.\n- Automatically updates only after a sync operation completes successfully without errors.\n- Serves as a reference point to assess if data is current or requires refreshing.\n- Typically stored and transmitted in ISO 8601 format to maintain uniformity across different systems and platforms.\n- Does not reflect the start time or duration of the synchronization process, only its successful completion.\n\n**Implementation guidance**\n- Record the timestamp in Coordinated Universal Time (UTC) to prevent timezone-related inconsistencies.\n- Update this field exclusively after confirming a successful synchronization to avoid misleading data states.\n- Validate the date and time format rigorously to comply with ISO 8601 standards (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Utilize this timestamp to drive incremental synchronization logic, data refresh triggers, or audit trails in downstream workflows.\n- Handle cases where the field may be null or missing, indicating that no synchronization has occurred yet.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n- \"2024-01-10T23:59:59Z\"\n\n**Important notes**\n- This timestamp marks the completion of synchronization, not its initiation.\n- Do not update this field if the synchronization process fails or is incomplete.\n- Maintaining timezone consistency (UTC) is critical to avoid synchronization conflicts or data mismatches.\n- The field may be null or omitted if synchronization has never been performed.\n- Systems relying on this field should implement fallback or error handling for missing or invalid timestamps.\n\n**Dependency chain**\n- Depends on successful completion of the synchronization process between the system and NetSuite.\n- Influences downstream processes such as incremental sync triggers, data validation, and audit logging.\n- May be referenced by monitoring or alerting systems to detect synchronization delays or failures.\n\n**Technical details**\n- Stored as a string in ISO 8601 format with UTC timezone designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Should be generated programmatically at the moment synchronization completes successfully."},"settings":{"type":"object","description":"settings:\n  Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - Encapsulates a collection of key-value pairs representing various configuration parameters that govern the distributed NetSuite integration’s operation.\n  - Includes toggles (boolean flags), numeric thresholds, timeouts, batch sizes, logging options, and other customizable settings relevant to distributed processing workflows.\n  - Typically optional for basic usage but essential for advanced customization, performance tuning, and adapting the integration to specific deployment environments.\n  - Changes to these settings can dynamically alter the integration’s behavior, such as retry logic, concurrency limits, and error handling strategies.\n  **Implementation guidance**\n  - Define each setting with a clear, descriptive key name and an appropriate data type (e.g., integer, boolean, string).\n  - Validate input
values rigorously to ensure they fall within acceptable ranges or conform to expected formats to prevent runtime errors.\n  - Provide sensible default values for all settings to maintain stable and predictable integration behavior when explicit configuration is absent.\n  - Document each setting comprehensively, including its purpose, valid values, default, and impact on the integration’s operation.\n  - Consider versioning or schema validation to manage changes in settings structure over time.\n  - Ensure that sensitive information is either excluded or securely handled if included within settings.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — configures retry attempts, enables detailed logging, and sets operation timeout.\n  - `{ \"batchSize\": 50, \"useNonProduction\": false }` — sets the number of records processed per batch and specifies production environment usage.\n  - `{ \"maxConcurrentJobs\": 10, \"errorThreshold\": 5, \"logLevel\": \"DEBUG\" }` — limits concurrent jobs, sets error tolerance, and defines logging verbosity.\n  **Important notes**\n  - Modifications to settings may require restarting or reinitializing the integration service to apply changes effectively.\n  - Incorrect or suboptimal configuration can cause integration failures, data inconsistencies, or degraded performance.\n  - Avoid storing sensitive credentials or secrets in settings unless encrypted or otherwise secured.\n  - Settings should be managed carefully in multi-environment deployments to prevent configuration drift.\n  **Dependency chain**\n  - Dependent on the overall NetSuite integration configuration and the distributed processing workflows these settings govern."},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to utilize the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture within the NetSuite environment.\n\n**Field behavior**\n- Determines if the SuiteScript 2.0 (SS2) framework is enabled for the NetSuite integration.\n- When set to true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions.\n- When set to false or omitted, the system defaults to using SuiteScript 1.0 or legacy frameworks.\n- Influences script loading mechanisms, module resolution, and API compatibility within NetSuite.\n- Impacts debugging, deployment, and maintenance processes due to differences in framework structure.\n\n**Implementation guidance**\n- Set this property to true to leverage modern SuiteScript 2.0 features such as improved modularity, asynchronous processing, and enhanced performance.\n- Verify that all custom scripts, modules, and third-party integrations are fully compatible with SuiteScript 2.0 before enabling this flag.\n- Conduct thorough testing in a non-production or development environment to identify potential issues arising from framework changes.\n- Use this flag to facilitate gradual migration from legacy SuiteScript 1.0 to SuiteScript 2.0, allowing toggling between frameworks during transition phases.\n- Update deployment pipelines and CI/CD processes to accommodate SuiteScript 2.0 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` — Disables SuiteScript 2.0, falling back to legacy SuiteScript 1.0 framework.\n- Property omitted — Defaults to legacy SuiteScript framework (typically 1.0), maintaining backward
compatibility.\n\n**Important notes**\n- Enabling the SS2 framework may require refactoring existing scripts to comply with SuiteScript 2.0 syntax, including the use of define/require for module loading.\n- Some legacy APIs, global objects, and modules available in SuiteScript 1.0 may be deprecated or behave differently in SuiteScript 2.0.\n- Performance improvements and new features in SS2 may not be realized if scripts are not properly adapted.\n- Ensure that all scheduled scripts, workflows, and integrations are reviewed for compatibility to prevent runtime errors.\n- Documentation and developer training may be necessary to fully leverage SuiteScript 2.0 capabilities.\n\n**Dependency chain**\n- Depends on SuiteScript 2.0 support in the target NetSuite account and on the compatibility of deployed scripts with the selected framework."},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"type: The type property specifies the category or classification of the framework version within the NetSuite distributed system. It defines the nature or role of the framework version, such as whether it is a major release, minor update, patch, or experimental build. This classification is essential for managing version control, deployment strategies, and compatibility assessments across the distributed environment.\n\n**Field behavior**\n- Determines how the framework version is identified, categorized, and processed within the system.\n- Influences compatibility checks, update mechanisms, and deployment workflows.\n- Enables filtering, sorting, and selection of framework versions based on their type.\n- Affects automated decision-making processes such as rollback, promotion, or deprecation of versions.\n\n**Implementation guidance**\n- Use standardized, predefined values or enumerations to represent different types (e.g., \"major\", \"minor\", \"patch\", \"experimental\") to ensure consistency.\n- Maintain uniform naming conventions and case sensitivity across all framework versions.\n- Implement validation logic to restrict the type property to allowed categories, preventing invalid or unsupported entries.\n- Document any custom or extended types clearly to avoid ambiguity.\n- Ensure that changes to the type property trigger appropriate notifications or logging for audit purposes.\n\n**Examples**\n- \"major\" — indicating a significant release that introduces new features or breaking changes.\n- \"minor\" — representing smaller updates that add enhancements or non-breaking improvements.\n- \"patch\" — for releases focused on bug fixes, security patches, or minor corrections.\n- \"experimental\" — denoting versions under testing, development, or not intended for production use.\n- \"deprecated\" — marking versions that are no longer supported or recommended for use.\n\n**Important notes**\n- The type value directly impacts deployment strategies, including automated rollouts and rollback procedures.\n- Accurate and consistent typing is critical for automated update systems and dependency management tools to function correctly.\n- Changes to the type property should be documented thoroughly and communicated to all relevant stakeholders to avoid confusion.\n- Misclassification can lead to improper handling of versions, potentially causing system instability or incompatibility.\n- The property should be reviewed regularly to align with evolving release management policies.\n\n**Dependency chain**\n- Depends on the version numbering scheme and release management policies defined in the frameworkVersion.\n- Interacts with deployment, update, and compatibility modules that rely on the type classification to
determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alerting processes."},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management processes."},"lowercase":{"type":"object","description":"lowercase: Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the content or structure of the version string, only its
letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided by the programming environment.\n- Should be applied consistently across all outputs, logs, and stored representations of the version string."}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system.
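For illustration only, a minimal sketch treating the value as a plain version string, as the examples below do (note that the sub-fields documented above describe schema-style descriptors, so confirm the exact payload shape against a real export before relying on it):\n```json\n{\n  \"frameworkVersion\": \"1.2.3\"\n}\n```\n\n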
It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts 
or quantities equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or recalculation of the associated distribution details."},"getList":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"type:\n  Specifies the category or classification of the records to be retrieved from the NetSuite system.\n  This property determines the type of entities that the getList operation will query and return.\n  It defines the scope of the data retrieval by indicating which NetSuite record type the API should target.\n  **Field behavior**\n  - Defines the specific record type to fetch, such as customers, transactions, or items.\n  - Influences the structure, fields, and format of the returned data based on the selected record type.\n  - Must be set to a valid NetSuite record type identifier recognized by the API.\n  - Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n  **Implementation guidance**\n  - Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n  - Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n  - Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n  - Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n  - When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n  **Examples**\n  - \"customer\"\n  - \"salesOrder\"\n  - \"inventoryItem\"\n  - \"employee\"\n  - \"vendor\"\n  - \"purchaseOrder\"\n  **Important notes**\n  - Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n  - The type property directly affects query performance, response size, and the complexity of the returned data.\n  - Some record types may require
additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n  - Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n  **Dependency chain**\n  - The 'type' property is a required input for the NetSuite.getList operation.\n  - The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n  - Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n  - Validation and error handling mechanisms rely on the correctness of the 'type' value.\n  **Technical details**\n  - Accepts string values corresponding to valid NetSuite record type identifiers."},"typeId":{"type":"string","description":"typeId:\n  The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n  **Field behavior**\n  - Specifies the category or type of records to fetch from NetSuite.\n  - Determines the schema and fields available in the returned records.\n  - Must correspond to a valid NetSuite record type identifier.\n  **Implementation guidance**\n  - Use predefined NetSuite record type IDs as per NetSuite documentation.\n  - Validate the typeId before making the API call to avoid errors.\n  - Ensure the typeId aligns with the permissions and roles of the API user.\n  **Examples**\n  - \"customer\" for customer records.\n  - \"salesOrder\" for sales order records.\n  - \"employee\" for employee records.\n  **Important notes**\n  - Incorrect or unsupported typeId values will result in API errors.\n  - The typeId is case-sensitive and must match NetSuite's expected values.\n  - Changes in NetSuite's API or record types may affect valid typeId values.\n  **Dependency chain**\n  - Depends on the NetSuite record types supported by the account.\n  - Influences the structure and content of the getList response.\n  **Technical details**\n  - Typically a string value representing the internal NetSuite record type.\n  - Used as a parameter in the getList API endpoint to filter records.\n  - Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system.
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- 
Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"object","description":"_id: The unique identifier for a record within the NetSuite system. This identifier is used to retrieve, update, or reference specific records in API operations.\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"getList: Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
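A minimal illustrative request fragment using the sub-fields defined above (the values are reused from their example lists, and the enclosing export payload is omitted):\n```json\n{\n  \"getList\": {\n    \"typeId\": \"customer\",\n    \"internalId\": \"12345\"\n  }\n}\n```\n\n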
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. 
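Because this flag lives on the searchPreferences object, a combined illustrative fragment looks like the following sketch (the values are examples drawn from the sibling field descriptions, not required settings):\n```json\n{\n  \"searchPreferences\": {\n    \"bodyFieldsOnly\": true,\n    \"pageSize\": 50,\n    \"returnSearchColumns\": true\n  }\n}\n```\n\n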
This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May interact with related preferences such as `pageSize` and `returnSearchColumns`."},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks.
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting pageSize avoids relying on those defaults and keeps pagination predictable."},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns: Specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce
bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `bodyFieldsOnly`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `bodyFieldsOnly` (limits results to the main record’s body fields).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"searchPreferences: Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending
order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with the individual preference fields defined on this object, such as `bodyFieldsOnly`, `pageSize`, and `returnSearchColumns`."},"file":{"type":"object","description":"Configuration for retrieving files from NetSuite file cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead.
For example, {{{myFileField.fileName}}}.\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user must have access to the folder; user roles can affect operations even with a valid folderInternalId\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder hierarchy changes do not affect the folderInternalId\n- Internal IDs may differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored.
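A minimal illustrative fragment combining this field with folderInternalId (the IDs reuse example values from these descriptions, and the enclosing netsuite export configuration is omitted):\n```json\n{\n  \"file\": {\n    \"folderInternalId\": \"12345\",\n    \"backupFolderInternalId\": \"67890\"\n  }\n}\n```\n\n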
This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Typically handled as a numeric string in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"112233\"\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting or renaming the folder associated with this ID will disrupt backup processes until updated.\n- Proper access rights and permissions are mandatory to write backup files to the specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: string containing the numeric internal folder ID (this field is declared as a string in the schema).\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript APIs or REST API calls."},"RDBMS":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be included for other connection types.
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**RDBMS export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical: what belongs in this object**\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be 
included for other connection types. It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. 
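To make this concrete, here is a minimal sketch of an S3 export block (assuming the property is exposed as `s3`, by analogy with the `rdbms` examples above; bucket names and prefixes are illustrative):\n```json\n{\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"acme-orders-inbound\",\n    \"keyStartsWith\": \"incoming/\",\n    \"keyEndsWith\": \".csv\",\n    \"backupBucket\": \"acme-orders-archive\",\n    \"keyPrefix\": \"processed/\"\n  }\n}\n```\n\nWith that configuration, a source key `incoming/2023/orders.csv` would be backed up as `processed/orders.csv`. 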
Only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (e.g., files, HTTP responses). Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. 
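For instance, a minimal sketch of a custom-parsing configuration (the `/order/items/item` path is a hypothetical node in your XML) that pins one such node to an array:\n```json\n{\n  \"version\": \"1\",\n  \"type\": \"xml\",\n  \"rules\": {\n    \"V0_json\": false,\n    \"listNodes\": [\"/order/items/item\"]\n  }\n}\n```\n\n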
Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. 
Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. 
REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for building array data types (a source array that already has\nthe right shape can instead be passed through via `extract` alone; see `dataType`):\n\n**When to Use**\n- Use when dataType ends with \"array\" (stringarray, objectarray, etc.) and the array must be built or transformed\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
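A typical helper entry pairs it with the extract path (illustrative):\n```json\n{\"extract\": \"$.order.items[*]\", \"sourceDataType\": \"objectarray\"}\n```\n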
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for building array data types (a source array that already has\nthe right shape can instead be passed through via `extract` alone; see `dataType`):\n\n**When to Use**\n- Use when dataType ends with \"array\" (stringarray, objectarray, etc.) and the array must be built or transformed\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
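A typical helper entry pairs it with the extract path (illustrative):\n```json\n{\"extract\": \"$.order.items[*]\", \"sourceDataType\": \"objectarray\"}\n```\n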
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. 
Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. 
**Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - 
\"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. 
It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. 
The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. 
`500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/exports/{_id}":{"get":{"summary":"Get an export","description":"Returns the complete configuration of a specific export.\n","operationId":"getExportById","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Export retrieved successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
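
One behavior in the spec above is easy to miss when writing a client: a 401 is a bare `{message}` object, while every other error uses the standard `{errors: [...]}` envelope. Below is a minimal TypeScript sketch of a caller that accounts for both shapes; the bearer token and export ID are placeholders, not values from a real account:

````typescript
// Minimal sketch: fetch a single export and handle both error body shapes.
// The token below is a placeholder.
const TOKEN = "<api-token>";

interface ApiError {
  code?: string | number;
  message: string;
  field?: string;
  source?: string;
}

async function getExport(id: string): Promise<unknown> {
  const res = await fetch(`https://api.integrator.io/v1/exports/${id}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (res.status === 401) {
    // The auth middleware responds with a bare { message } object, no errors[].
    const { message } = (await res.json()) as { message: string };
    throw new Error(`Authentication failed: ${message}`);
  }
  if (!res.ok) {
    // Other failures use the standard { errors: [...] } envelope.
    const { errors } = (await res.json()) as { errors: ApiError[] };
    throw new Error(errors.map((e) => e.message).join("; "));
  }
  return res.json(); // complete export object (Response schema)
}

getExport("507f1f77bcf86cd799439011") // hypothetical _id
  .then(console.log)
  .catch(console.error);
````

Keying off `res.status` for the 401 case avoids destructuring an `errors[]` array that is not there, per the note in the 401 response description.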

## Update an export

> Updates an existing export with the provided configuration.\
> This is used for major updates to an export's structure or behavior.<br>
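
Before the schema, here is a minimal TypeScript sketch of what an update call might look like. It assumes the conventional `PUT /v1/exports/{_id}` request shape; all IDs and field values are illustrative placeholders (the `delta.dateField` value, for instance, depends entirely on the source system):

````typescript
// Minimal sketch: update an existing export. All IDs and values are placeholders.
const exportId = "507f1f77bcf86cd799439011"; // hypothetical export _id

const exportConfig = {
  name: "Orders - delta export",             // required on every request
  _connectionId: "5f3e9a1b2c4d5e6f7a8b9c0d", // hypothetical connection _id
  adaptorType: "HTTPExport",                 // cannot be changed after creation
  type: "delta",                             // "delta" requires the delta object below
  delta: { dateField: "updated_at" },        // illustrative source-system date field
  pageSize: 50,
};

async function updateExport(): Promise<unknown> {
  const res = await fetch(`https://api.integrator.io/v1/exports/${exportId}`, {
    method: "PUT",
    headers: {
      Authorization: "Bearer <api-token>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(exportConfig),
  });
  if (!res.ok) throw new Error(`Update failed with HTTP ${res.status}`);
  return res.json(); // updated export, including read-only fields like lastModified
}
````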

````json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor clarity when managing multiple integrations. The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. 
For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. 
The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid \"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate limiting and concurrency controls apply differently\n\nWhen false (default), the export 
operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File"}}},"http":{"$ref":"#/components/schemas/Http"},"file":{"$ref":"#/components/schemas/File"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"as2":{"$ref":"#/components/schemas/AS2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"ftp":{"$ref":"#/components/schemas/FTP"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports 
that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. `adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. 
Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional 
acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. 
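\nA quick sketch of the pair in action (field names are hypothetical):\n```\n\"oneToMany\": true,\n\"pathToMany\": \"lines.lineItems\"\n```\n\nIn this sketch, pathToMany does the real work.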
It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. 
Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adapter types (except HTTP and REST where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction:**\n- If the adaptor type is HTTP, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" in an HTTP export's delta configuration.
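If you include it, the configuration will be invalid.\n\nPutting this together, a minimal HTTP delta sketch (the /api/v1/orders endpoint is hypothetical, and this assumes the query string is carried in the export's http.relativeURI):\n```\n\"type\": \"delta\",\n\"delta\": { \"lagOffset\": 60000 },\n\"http\": { \"relativeURI\": \"/api/v1/orders?modified_since={{lastExportDateTime}}\" }\n```\n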
Example HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types.
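\nA minimal test-mode sketch:\n```\n\"type\": \"test\",\n\"test\": { \"limit\": 10 }\n```\n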
Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. 
Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. 
Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. **Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. **System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. 
May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**Hmac Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more complex to configure\n\n**Secret url**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose 
the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**Sha-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**Sha-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**Sha-384/sha-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": 
\"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. 
It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
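\nA token-verification sketch (the token value is hypothetical):\n```\n\"webhook\": {\n  \"verify\": \"token\",\n  \"token\": \"<random-high-entropy-value>\",\n  \"path\": \"meta.token\"\n}\n```\n\nOne more note on path resolution.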
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
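\nA Basic Authentication sketch (credentials are hypothetical):\n```\n\"webhook\": {\n  \"verify\": \"basic\",\n  \"username\": \"webhook_user\",\n  \"password\": \"<strong-random-password>\"\n}\n```\n\nAnd whether in a sketch or in production: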
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 ok**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**Json (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**Xml**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n\nIMPORTANT: Be careful when setting 
security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook provider's documentation for exact
requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 ok (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**Json (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**Xml**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. 
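\nFor example, a Slack-style verification sketch combining the settings described above:\n```\n\"challengeSuccessStatusCode\": 200,\n\"challengeSuccessMediaType\": \"json\",\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n\nThe pairing matters.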
Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**Utf-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**Utf-16le (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**Gb18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**Iso-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift jis (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for ai agents**\n\n1. **Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. 
**Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**CSV Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**JSON Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**XML Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data. 
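For orientation, a minimal sketch of a File object using this field (a hypothetical fragment, not a complete export configuration):\n\n```\n{\n  \"type\": \"csv\",        // format of the source files\n  \"output\": \"records\"   // parse file contents into records (modes below)\n}\n```\n\n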
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for ai agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. 
**Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. **Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for ai agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for ai agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. **Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**Pgp/gpg Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. **Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for ai agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects concurrent processing but within system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
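For quick intuition, a worked sketch on hypothetical records:\n\n```\nInput priority values:                        [1, 3, 2]\n{\"field\": \"priority\", \"descending\": false}  → 1, 2, 3 (ascending)\n{\"field\": \"priority\", \"descending\": true}   → 3, 2, 1 (descending)\n```\n\n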
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes| Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quotes handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard csv (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European csv (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. **Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. 
**Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. **Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. 
**Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When a header row exists but its names should be ignored, use `hasHeaderRow: false` with `rowsToSkip: 1` so the header row is skipped and fields are referenced by position\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for ai agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count from Zero**:\n    - Row 1 = 0, Row 2 = 1, Row 3 = 2, etc.\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. **For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. 
**Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior - test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**Json parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. **Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. 
**Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. 
**Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. **Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- \"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". 
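A minimal sketch of an Excel file configuration (a hypothetical fragment, not a complete export):\n\n```\n{\n  \"type\": \"xlsx\",\n  \"xlsx\": {\"hasHeaderRow\": true}   // first row holds field names\n}\n```\n\n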
This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. **Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields 
are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. **Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
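A minimal sketch of an XML file configuration (a hypothetical fragment; the XPath is illustrative):\n\n```\n{\n  \"type\": \"xml\",\n  \"xml\": {\"resourcePath\": \"/Records/Record\"}   // XPath selecting the record elements\n}\n```\n\n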
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**Xml parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced xml**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or `/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**Xpath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. 
When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For repeating integration, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**Xpath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. **Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. 
**Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. **Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common xpath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
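For illustration, a minimal sketch (the ObjectId is a placeholder; always reference a real file definition rather than an invented ID):\n\n```json\n{\n  \"type\": \"filedefinition\",\n  \"fileDefinition\": {\n    \"_fileDefinitionId\": \"000000000000000000000000\"  // placeholder ObjectId\n  }\n}\n```\n\n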
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. **File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**EDI Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique 
identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct id**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. **File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese are the most commonly available fields across most file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. **Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of 
filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. 
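For illustration, a complete filter sketch using this mechanism (the field name and value mirror the examples below and are placeholders):\n\n```json\n{\n  \"type\": \"expression\",\n  \"expression\": {\n    \"version\": \"1\",\n    \"rules\": [\"notequals\", [\"extract\", \"status\"], \"cancelled\"]  // skip cancelled records\n  }\n}\n```\n\n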
This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- 
Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. 
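For illustration, a complete filter sketch using this mechanism (the ObjectId and function name are placeholders and must reference a real Script resource):\n\n```json\n{\n  \"type\": \"script\",\n  \"script\": {\n    \"_scriptId\": \"000000000000000000000000\",  // placeholder ObjectId of an existing Script\n    \"function\": \"filterItems\"  // must match a function defined in that script\n  }\n}\n```\n\n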
This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Http":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (MOST COMMON CASE)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (RARE USE CASE)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files from an FTP server via HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
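For example (values illustrative, not from a specific API):\n\n```\nbaseURI (connection):  https://api.example.com/v2\nrelativeURI (export):  /orders?status=pending\nRequest URL:           https://api.example.com/v2/orders?status=pending\n```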
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. 
Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for ai agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. **page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. 
**skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. **nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. 
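As a sketch, a page-based setup pairs the paging object with a pagination variable in the main relative URI (endpoint and parameter names illustrative):\n\n```\nrelativeURI: /orders?page={{export.http.paging.page}}&limit=100\npaging:      { \"method\": \"page\", \"page\": 1 }\n```\n\n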
Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for ai agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. 
When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page url pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error** PATTERNS\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use** \"body\":\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\" \n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. 
Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
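// Illustrative sketch (hypothetical endpoint and JSON paths): a token-paging
// configuration wiring together the four-step dependency chain described above,
// including a different token location after the first page. The
// {{export.http.paging.token}} variable would be placed in the export's
// relative URI, e.g. /v1/items?cursor={{export.http.paging.token}}.
{
  "paging": {
    "method": "token",
    "path": "meta.initialNextToken",
    "pathAfterFirstRequest": "pagination.nextToken",
    "pathLocation": "body"
  }
}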
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. 
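// Illustrative sketch (hypothetical GraphQL shape): method="body" paging that
// injects the cursor from the previous response into the override body via the
// previous_page context described above.
{
  "paging": {
    "method": "body",
    "body": "{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}"
  }
}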
This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. 
For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. 
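// Illustrative sketch (hypothetical response paths): page-based paging that
// starts at page 1 and stops early using the last-page signals described
// above. Assumes {{export.http.paging.page}} appears in the relative URI.
{
  "paging": {
    "method": "page",
    "page": 1,
    "maxPagePath": "meta.totalPages",
    "lastPagePath": "pagination.hasMore",
    "lastPageValues": ["false"]
  }
}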
The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. `[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. 
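// Illustrative sketch (hypothetical response path): skip-based paging that
// terminates once the total record count reported by the API has been
// processed. Assumes {{export.http.paging.skip}} appears in the relative URI;
// per the note above, maxPagePath would take precedence if both were set.
{
  "paging": {
    "method": "skip",
    "maxCountPath": "meta.totalCount"
  }
}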
Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
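// Illustrative sketch reusing the wrapped-response example above: extract the
// records array, and treat HTTP 200 bodies as successful only when the status
// field is "ok". Paths and values are illustrative.
{
  "response": {
    "resourcePath": "data.customers",
    "successPath": "status",
    "successValues": ["ok"],
    "errorPath": "error.message"
  }
}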
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but inverted logic — checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","description":"Reference to the HTTP connector endpoint used by this export.\n\n**Field behavior**\n\n- Identifies which HTTP connector endpoint configuration (URL, headers, authentication, and related settings) routes this export's requests\n- Must reference a valid, existing HTTP connector endpoint\n- Should generally remain unchanged after initial assignment to keep routing behavior consistent\n\n**Implementation guidance**\n\n- Validate the identifier against the configured HTTP connector endpoints; a missing or invalid value will typically cause the export to fail\n- Changes to the endpoint configuration referenced by this ID affect every integration that relies on it\n"}}},"Salesforce":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce, either through queries or real-time events.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n**Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**SOQL Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. 
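// Illustrative sketches (hypothetical objects and queries) contrasting the two
// modes just described: a pull-model SOQL export versus a push-model listener.
{
  "salesforce": {
    "type": "soql",
    "api": "rest",
    "soql": { "query": "SELECT Id, Name FROM Account" }
  }
}
{
  "salesforce": {
    "type": "distributed",
    "sObjectType": "Account",
    "distributed": { "qualifier": "Status__c = 'Approved'" }
  }
}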
This decision affects all other configuration aspects.\n","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": \"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**SOQL Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static File ID**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic File ID (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous 
steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**REST API**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk API 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your 
data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the 
export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- Use * sparingly (only with specific sObjects that support it)\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Case-sensitive (match Salesforce API names exactly)\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When using relationship queries, child objects count against\ngovernor limits differently. 
For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":200000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event 
handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
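// Illustrative sketch (hypothetical fields): a real-time listener combining the
// qualifier and batchSize settings described above, with parent fields pulled
// in via referencedFields.
{
  "salesforce": {
    "type": "distributed",
    "sObjectType": "Opportunity",
    "distributed": {
      "qualifier": "Amount > 1000 AND ISCHANGED(StageName)",
      "batchSize": 10,
      "referencedFields": ["Account.Name", "Owner.Email"]
    }
  }
}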
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild records related to the primary object. 
Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- Empty array will retrieve only Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = 
true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**As2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**As2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. 
It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid 
operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP":{"type":"object","description":"Configuration object for 
FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Jdbc export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the 
actual record ID from each exported row.\n"}}}}},"MongoDB":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently support the following capabilities:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string, so inner quotes must be escaped in the raw payload (as shown in the examples below)\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string, so inner quotes must be escaped in the raw payload (as shown in the examples below)\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum 
size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**Netsuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.blob`\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"` and configure `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. 
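A minimal blob-export sketch (the internalId value \"12345\" is a placeholder):\n\n```\n{\n  \"type\": \"blob\",\n  \"netsuite\": {\n    \"internalId\": \"12345\"\n  }\n}\n```\n\n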
For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This allows you to use `suiteapp2.0` saved searches with dynamic inputs easily.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob exports (raw file transfer without parsing)"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"metadata: >\n  A collection of key-value pairs that provide additional contextual information about the NetSuite entity. This metadata can include custom attributes, tags, or any supplementary data that helps to describe, categorize, or operationally enhance the entity beyond its standard properties. 
It serves as an extensible mechanism to store user-defined or system-generated information that is not part of the core entity schema, enabling greater flexibility and customization in managing NetSuite data.\n  **Field behavior**\n  - Stores arbitrary additional information related to the NetSuite entity, enhancing its descriptive or operational context.\n  - Can include custom fields defined by users, system-generated tags, flags, timestamps, or nested structured data.\n  - Typically represented as a dictionary or map with string keys and values that may be strings, numbers, booleans, arrays, or nested objects.\n  - Metadata entries are optional and do not affect the core entity behavior unless explicitly integrated with business logic.\n  - Supports dynamic addition, update, and removal of metadata entries without impacting the primary entity schema.\n  **Implementation guidance**\n  - Ensure all metadata keys are unique within the collection to prevent accidental overwrites.\n  - Support flexible and heterogeneous value types, including primitive types and nested structures, to accommodate diverse metadata needs.\n  - Validate keys and values against naming conventions, length restrictions, and allowed character sets to maintain consistency and prevent errors.\n  - Implement efficient mechanisms for CRUD (Create, Read, Update, Delete) operations on metadata entries to facilitate easy management.\n  - Consider indexing frequently queried metadata keys for performance optimization.\n  - Provide clear documentation or schema definitions for any standardized or commonly used metadata keys.\n  **Examples**\n  - {\"department\": \"Sales\", \"region\": \"EMEA\", \"priority\": \"high\"}\n  - {\"customField1\": \"value1\", \"customFlag\": true}\n  - {\"tags\": [\"urgent\", \"review\"], \"lastUpdatedBy\": \"user123\"}\n  - {\"approvalStatus\": \"pending\", \"reviewCount\": 3, \"metadataVersion\": 2}\n  - {\"nestedInfo\": {\"createdBy\": \"admin\", \"createdAt\": \"2024-05-01T12:00:00Z\"}}\n  **Important notes**\n  - Metadata is optional and should not interfere with the core functionality or validation of the NetSuite entity.\n  - Modifications to metadata typically do not trigger business workflows or logic unless explicitly configured to do so."},"selectoption":{"type":"object","description":"selectoption: >\n  Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each selectoption consists of a user-friendly label and an associated value that uniquely identifies the option internally. 
This structure enables consistent data entry, filtering, and categorization within NetSuite forms and records.\n  **Field behavior**\n  - Defines a single, discrete choice available to users in selection interfaces such as dropdowns, radio buttons, or multi-select lists.\n  - Can be part of a collection of options presented to the user for making a selection.\n  - Includes both a display label (visible to users) and a corresponding value (used internally or in API interactions).\n  - Supports filtering, categorization, and conditional logic based on the selected option.\n  - May be dynamically generated or statically defined depending on the field configuration.\n  **Implementation guidance**\n  - Assign a unique and stable value to each selectoption to prevent ambiguity and maintain data integrity.\n  - Use clear, concise, and user-friendly labels that accurately describe the option’s meaning.\n  - Validate option values against expected data types and formats to ensure compatibility with backend processing.\n  - Implement localization strategies for labels to support multiple languages without altering the underlying values.\n  - Consistently apply selectoption structures across all fields requiring predefined choices to standardize user experience.\n  - Consider accessibility best practices when designing labels and selection controls.\n  **Examples**\n  - { label: \"Active\", value: \"1\" }\n  - { label: \"Inactive\", value: \"2\" }\n  - { label: \"Pending Approval\", value: \"3\" }\n  - { label: \"High Priority\", value: \"high\" }\n  - { label: \"Low Priority\", value: \"low\" }\n  **Important notes**\n  - The label is intended for display purposes and may be localized; the value is the definitive identifier used in data processing and API calls.\n  - Values should remain consistent over time to avoid breaking integrations or corrupting data.\n  - When supporting multiple languages, labels should be translated appropriately while keeping values unchanged.\n  - Changes to selectoption values or labels should be managed carefully to prevent unintended side effects.\n  - Selectoption entries may be influenced by the context of the parent record, user roles, or permissions.\n  **Dependency chain**\n  - Utilized within field definitions that support selection inputs (e.g., dropdowns, radio buttons, or multi-select lists)."},"customFieldMetadata":{"type":"object","description":"customFieldMetadata: Metadata information related to custom fields defined within the NetSuite environment, providing comprehensive details about each custom field's configuration, behavior, and constraints to facilitate accurate data handling and UI generation.\n**Field behavior**\n- Contains detailed metadata about custom fields, including their definitions, types, configurations, and constraints.\n- Provides contextual information necessary for understanding, validating, and manipulating custom fields programmatically.\n- May include attributes such as field ID, label, data type, default values, validation rules, display settings, sourcing information, and field dependencies.\n- Used to dynamically interpret or generate UI elements, data validation logic, or data structures based on custom field configurations.\n- Reflects the current state of custom fields as defined in the NetSuite account, enabling synchronization between the API consumer and the NetSuite environment.\n**Implementation guidance**\n- Ensure that the metadata accurately reflects the current state of custom fields in the NetSuite account by synchronizing regularly or on configuration changes.\n- Update the metadata whenever custom fields are added, modified, or removed to maintain consistency and prevent data integrity issues.\n- Use this metadata to validate input data against custom field constraints (e.g., data type, required status, allowed values) before processing or submission.\n- Consider caching metadata for performance optimization but implement mechanisms to refresh it periodically or on-demand to capture updates.\n- Handle cases where customFieldMetadata might be null, incomplete, or partially loaded gracefully, including fallback logic or error handling.\n- Respect user permissions and access controls when retrieving or exposing custom field metadata to ensure compliance with security policies.\n**Examples**\n- A custom field metadata object describing a custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- Metadata for a custom list/record field specifying the list of valid options, their internal IDs, and whether multiple selections are allowed.\n- Information about a custom date field including its date format, minimum and maximum allowed dates, and any validation rules applied.\n- Metadata describing a custom currency field with precision settings and default currency.\n**Important notes**\n- The structure and content of customFieldMetadata may vary depending on the NetSuite configuration, customizations, and API version.\n- Access to custom field metadata may require appropriate permissions within the NetSuite environment; unauthorized access may result in incomplete or no metadata.\n- Changes to custom fields in NetSuite (such as renaming, deleting, or changing data types) can impact the metadata and any integrations that rely on it."},"skipGrouping":{"type":"boolean","description":"skipGrouping: Indicates whether to bypass the grouping of related records or transactions during processing, allowing each item to be handled individually rather than aggregated into groups.\n\n**Field behavior**\n- When set to true, the system processes each record or transaction independently, without combining them into groups based on shared attributes.\n- When set to false or omitted, related records or transactions are aggregated according to predefined grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Influences how data is structured, summarized, and reported in outputs or passed to downstream systems.\n- Affects the level of detail and granularity available in the processed data.\n\n**Implementation guidance**\n- Utilize this flag to control processing granularity, especially when detailed, record-level analysis or reporting is required.\n- Confirm that downstream systems, reports, or integrations can accommodate ungrouped data if skipGrouping is enabled.\n- Assess the potential impact on system performance and data volume, as disabling grouping may significantly increase the number of processed items.\n- Consider the use case carefully: grouping is generally preferred for summary reports, while skipping grouping suits detailed audits or troubleshooting.\n\n**Examples**\n- skipGrouping: true — Processes each transaction separately, providing detailed, unaggregated data.\n- skipGrouping: false — Groups transactions by customer or date, producing summarized results.\n- skipGrouping omitted — Defaults to grouping enabled, aggregating related records.\n\n**Important notes**\n- Enabling skipGrouping can lead to increased processing time, higher memory usage, and larger output datasets.\n- Some reports, 
dashboards, or integrations may require grouped data; verify compatibility before enabling this option.\n- The default behavior is typically grouping enabled (skipGrouping = false) unless explicitly overridden.\n- Changes to this setting may affect data consistency and comparability with previously generated reports.\n\n**Dependency chain**\n- Often depends on other properties that define grouping keys or criteria (e.g., groupBy fields).\n- May interact with filtering, sorting, or pagination settings within the processing pipeline.\n- Could influence or be influenced by aggregation functions or summary calculations applied downstream.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false (grouping enabled).\n- Implemented as a conditional flag checked during the data aggregation phase.\n- When true, bypasses aggregation logic and processes each record individually.\n- Typically integrated into the processing workflow to toggle between grouped and ungrouped data handling."},"statsOnly":{"type":"boolean","description":"statsOnly indicates whether the API response should include only aggregated statistical summary data without any detailed individual records. This property is used to optimize response size and improve performance when detailed data is unnecessary.\n\n**Field behavior**\n- When set to true, the API returns only summary statistics such as counts, averages, sums, or other aggregate metrics.\n- When set to false or omitted, the response includes both detailed data records and the associated statistical summaries.\n- Helps reduce network bandwidth and processing time by excluding verbose record-level data.\n- Primarily intended for use cases like dashboards, reports, or monitoring tools where only high-level metrics are required.\n\n**Implementation guidance**\n- Default the value to false to ensure full data retrieval unless explicitly requesting summary-only data.\n- Validate that the input is a boolean to prevent unexpected API behavior.\n- Use this flag selectively in scenarios where detailed records are not needed to avoid loss of critical information.\n- Ensure the API endpoint supports this flag before usage, as some endpoints may not implement statsOnly functionality.\n- Adjust client-side logic to handle different response structures depending on the flag’s value.\n\n**Examples**\n- statsOnly: true — returns only aggregated statistics such as total counts, averages, or sums without any detailed entries.\n- statsOnly: false — returns full detailed data records along with statistical summaries.\n- statsOnly omitted — defaults to false, returning detailed data and statistics.\n\n**Important notes**\n- Enabling statsOnly disables access to individual record details, which may limit in-depth data analysis.\n- The response schema changes significantly when statsOnly is true; clients must handle these differences gracefully.\n- Some API endpoints may not support this property; verify compatibility in the API documentation.\n- Pagination parameters may be ignored or behave differently when statsOnly is enabled, since detailed records are excluded.\n\n**Dependency chain**\n- May interact with filtering, sorting, or date range parameters that influence the statistical data returned.\n- Can affect pagination logic because detailed records are omitted when statsOnly is true.\n- Dependent on the API endpoint’s support for summary-only responses.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or 
part of the request payload depending on API design.\n- Alters the response payload structure by excluding detailed record arrays and including only aggregated metrics.\n- Helps optimize API performance and reduce response payload size in scenarios where detailed data is unnecessary."},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export type is \"blob\". For blob exports, you must specify the internalId of the file to export from the NetSuite file cabinet.\n\n**Field behavior**\n- Identifies a specific file in the NetSuite file cabinet by its internal ID\n- Required for blob exports (raw binary file transfers without parsing)\n- The file at this internal ID will be exported as-is without parsing\n\n**Implementation guidance**\n- For blob exports: Set netsuite.type to \"blob\" (on the export, not netsuite) and provide netsuite.internalId\n- Obtain the file's internalId from NetSuite's file cabinet or via API\n- Validate the internalId corresponds to an existing file before export\n\n**Examples**\n- \"12345\" - Internal ID of a specific file\n- \"67890\" - Another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId which specifies a folder for file exports with parsing\n- For blob exports: Use netsuite.internalId (file ID)\n- For file exports with parsing: Use netsuite.file.folderInternalId (folder ID)"},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"purgeFileAfterExport: Whether to delete the file from the system after it has been successfully exported. This property controls the automatic removal of the source file post-export to help manage storage and maintain system cleanliness.\n\n**Field behavior**\n- Determines if the exported file should be removed from the source storage immediately after a successful export operation.\n- When set to `true`, the system deletes the file as soon as the export completes without errors.\n- When set to `false` or omitted, the file remains intact in its original location after export.\n- Facilitates automated cleanup of files to prevent unnecessary storage consumption.\n- Does not affect the export process itself; deletion occurs only after confirming export success.\n\n**Implementation guidance**\n- Confirm that the export operation has fully completed and succeeded before initiating file deletion to avoid data loss.\n- Verify that the executing user or system has sufficient permissions to delete files from the source location.\n- Assess downstream workflows or processes that might require access to the file after export before enabling purging.\n- Implement logging or notification mechanisms to record when files are purged for audit trails and troubleshooting.\n- Consider integrating with retention policies or backup systems to prevent accidental loss of important data.\n\n**Examples**\n- `true` — The file will be deleted immediately after a successful export.\n- `false` — The file will remain in the source location after export.\n- Omitted or `null` — Defaults to `false`, meaning the file is retained post-export.\n\n**Important notes**\n- File deletion is permanent and cannot be undone; ensure that the file is no longer needed before enabling this option.\n- Use caution in multi-user or multi-process environments where files may be shared or required beyond the export operation.\n- Immediate purging may 
interfere with backup, archival, or compliance requirements if files are deleted too soon.\n- Consider implementing safeguards or confirmation steps if enabling automatic purging in production environments.\n\n**Dependency chain**\n- Relies on the successful completion of the export operation to trigger file deletion.\n- Dependent on file system permissions and access controls to allow deletion.\n- May be affected by other system settings related to file retention, archival, or cleanup policies.\n- Could interact with error handling mechanisms to prevent deletion if export fails or is incomplete.\n\n**Technical details**\n- Typically represented as a boolean value (`true` or `false`).\n- Default behavior is to retain files unless explicitly set to `true`.\n- Deletion should be performed using secure, authorized file operations."}},"description":"blob: Configuration for retrieving raw binary files from NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: Raw binary transfer WITHOUT parsing - leave netsuite.type blank\n- File exports: Parse file contents into records - set netsuite.type to \"file\"\n\nDo NOT use blob configuration when you want file content parsed into data records.\n\n**Field behavior**\n- Stores raw binary data including files, images, audio, video, or any non-textual content.\n- Supports download operations for binary content from the NetSuite file cabinet.\n- File content is transferred as-is without any parsing or transformation.\n- May be immutable or mutable depending on the specific NetSuite entity and operation.\n- Requires careful handling to maintain data integrity during transmission and storage.\n\n**Implementation guidance**\n- Always encode binary data (e.g., using base64) when transmitting over text-based protocols such as JSON or XML to ensure data integrity.\n- Validate the size of the blob against NetSuite API limits and storage constraints to prevent errors or truncation.\n- Implement secure handling practices, including encryption in transit and at rest, to protect sensitive binary data.\n- Use appropriate MIME/content-type headers when uploading or downloading blobs to correctly identify the data format.\n- Consider chunked uploads/downloads or streaming for large blobs to optimize performance and resource usage.\n- Ensure consistent encoding and decoding mechanisms between client and server to avoid data corruption.\n\n**Examples**\n- A base64-encoded PDF document attached to a NetSuite customer record.\n- An image file (PNG or JPEG) stored as a blob for product catalog entries.\n- A binary export of transaction data in a proprietary format used for integration with external systems.\n- Audio or video files associated with marketing campaigns or training materials.\n- Encrypted binary blobs containing sensitive configuration or credential data.\n\n**Important notes**\n- Blob size may be limited by NetSuite API constraints or underlying storage capabilities; exceeding these limits can cause failures.\n- Encoding and decoding must be consistent and correctly implemented to prevent data corruption or loss.\n- Large 
blobs should be handled using chunked or streamed transfers to avoid memory issues and improve reliability.\n- Security is paramount; blobs may contain sensitive information requiring encryption and strict access controls.\n- Access to blob data typically requires proper authentication and authorization aligned with NetSuite’s security model.\n\n**Dependency chain**\n- Dependent on authentication and authorization mechanisms enforced by NetSuite."},"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"recordType specifies the type of NetSuite record that the RESTlet will interact with. This property determines the schema, validation rules, and operations applicable to the record within the NetSuite environment, directly influencing how data is processed and managed by the RESTlet.\n\n**Field behavior**\n- Defines the specific NetSuite record type (e.g., customer, salesorder, invoice) targeted by the RESTlet.\n- Influences the structure and format of data payloads sent to and received from the RESTlet.\n- Controls validation rules, mandatory fields, and available operations based on the selected record type.\n- Affects permissions and access controls enforced during RESTlet execution, ensuring compliance with NetSuite security settings.\n- Determines the applicable business logic and workflows triggered by the RESTlet for the specified record type.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the NetSuite record type as recognized by the NetSuite system to ensure accurate targeting.\n- Validate the recordType value against the list of supported NetSuite record types to prevent runtime errors and ensure compatibility.\n- Confirm that the RESTlet script has the necessary permissions and roles assigned to access and manipulate the specified record type.\n- When handling multiple record types dynamically, implement conditional logic to accommodate differences in data structure and processing requirements.\n- For custom record types, always use the script ID format (e.g., \"customrecord_mycustomrecord\") to avoid ambiguity.\n- Test RESTlet behavior thoroughly after changing the recordType to verify correct handling of data and operations.\n\n**Examples**\n- \"customer\"\n- \"salesorder\"\n- \"invoice\"\n- \"employee\"\n- \"customrecord_mycustomrecord\"\n- \"vendor\"\n- \"purchaseorder\"\n\n**Important notes**\n- The recordType must correspond to a valid and supported NetSuite record type; invalid values will cause API calls to fail.\n- Custom record types require referencing by their script IDs, which typically start with \"customrecord_\".\n- Modifying the recordType may necessitate updates to the RESTlet’s codebase to handle different data schemas and business logic.\n- Permissions and role restrictions in NetSuite can limit access to certain record types, impacting RESTlet functionality.\n- Consistency in recordType usage is critical for maintaining data integrity and predictable RESTlet behavior.\n\n**Dependency chain**\n- Depends on the NetSuite environment’s available record types and their configurations.\n- Influences the RESTlet’s data validation, processing logic, and response formatting.\n"},"searchId":{"type":"string","description":"searchId: The unique identifier for a saved search in NetSuite, used to specify which saved search the RESTlet should execute. This ID corresponds to the internal ID assigned to saved searches within the NetSuite system. 
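As a hedged sketch of where this field sits (both IDs below are placeholders):\n\n```\n{\n  \"netsuite\": {\n    \"type\": \"restlet\",\n    \"restlet\": {\n      \"recordType\": \"customer\",\n      \"searchId\": \"1234\"\n    }\n  }\n}\n```\n\n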
It enables the RESTlet to run predefined queries and retrieve data based on the saved search’s criteria and configuration.\n\n**Field behavior**\n- Specifies the exact saved search to be executed by the RESTlet.\n- Must correspond to a valid and existing saved search internal ID within the NetSuite account.\n- Determines the dataset and filters applied when retrieving search results.\n- Typically required when invoking the RESTlet to perform search operations.\n- Influences the structure and content of the response based on the saved search definition.\n\n**Implementation guidance**\n- Verify that the searchId matches an existing saved search internal ID in the target NetSuite environment.\n- Validate the searchId format and existence before making the RESTlet call to prevent runtime errors.\n- Use the internal ID as a string or numeric value consistent with NetSuite’s conventions.\n- Implement error handling for scenarios where the searchId is invalid, missing, or inaccessible due to permission restrictions.\n- Ensure the integration role or user has appropriate permissions to access and execute the saved search.\n- Consider caching or documenting frequently used searchIds to improve maintainability.\n\n**Examples**\n- \"1234\" — a numeric internal ID representing a specific saved search.\n- \"5678\" — another valid saved search internal ID.\n- \"1001\" — an example of a saved search ID used to retrieve customer records.\n- \"2002\" — a saved search ID configured to return transaction data.\n\n**Important notes**\n- The searchId must be accessible by the user or integration role making the RESTlet call; otherwise, access will be denied.\n- Providing an incorrect or non-existent searchId will result in errors or empty search results.\n- Permissions and sharing settings on the saved search directly affect the data returned by the RESTlet.\n- The saved search must be properly configured with the desired filters, columns, and criteria to ensure meaningful results.\n- Changes to the saved search (e.g., modifying filters or columns) will impact the RESTlet output without changing the searchId.\n\n**Dependency chain**\n- Depends on the existence of a saved search configured in the NetSuite account.\n- Requires appropriate user or integration role permissions to access the saved search.\n- Relies on the saved search’s configuration (filters, columns, criteria) to determine the data returned."},"useSS2Restlets":{"type":"boolean","description":"useSS2Restlets: >\n  Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. 
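In practice this is a single boolean set alongside the other restlet fields, for example (a sketch; the searchId is a placeholder):\n\n  ```\n  \"restlet\": {\n    \"searchId\": \"1234\",\n    \"useSS2Restlets\": true\n  }\n  ```\n  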
This setting controls the version of RESTlets invoked during API communication with NetSuite, impacting compatibility, performance, and available features.\n  **Field behavior**\n  - Determines the RESTlet version used for all API interactions within the NetSuite integration.\n  - When set to `true`, the system exclusively uses SuiteScript 2.0 RESTlets.\n  - When set to `false` or omitted, SuiteScript 1.0 RESTlets are used by default.\n  - Influences the structure, capabilities, and response formats of API calls.\n  **Implementation guidance**\n  - Enable this flag to take advantage of SuiteScript 2.0’s improved modularity, asynchronous capabilities, and modern JavaScript syntax.\n  - Verify that SuiteScript 2.0 RESTlets are properly deployed, configured, and accessible in the target NetSuite environment before enabling.\n  - Conduct comprehensive testing to ensure existing integrations and workflows remain functional when switching from SuiteScript 1.0 to 2.0 RESTlets.\n  - Coordinate with NetSuite administrators and developers to update or rewrite RESTlets if necessary.\n  **Examples**\n  - `true` — API calls will utilize SuiteScript 2.0 RESTlets, enabling modern scripting features.\n  - `false` — API calls will continue using legacy SuiteScript 1.0 RESTlets for backward compatibility.\n  **Important notes**\n  - SuiteScript 2.0 RESTlets support modular script architecture and ES6+ JavaScript features, improving maintainability and performance.\n  - Legacy RESTlets written in SuiteScript 1.0 may not be compatible with SuiteScript 2.0; migration or parallel support might be required.\n  - Switching RESTlet versions can change API response formats and behaviors, potentially impacting downstream systems.\n  - Ensure proper version control and rollback plans are in place when changing this setting.\n  **Dependency chain**\n  - Depends on the deployment and availability of SuiteScript 2.0 RESTlets within the NetSuite account.\n  - Requires that the API client and integration logic support the RESTlet version selected.\n  - May depend on other configuration settings related to authentication and script permissions.\n  **Technical details**\n  - SuiteScript 2.0 RESTlets use the AMD module format and support modern ES6+ JavaScript features."},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the version type of the NetSuite Restlet being used. It determines the specific version or variant of the Restlet API that the integration will interact with, ensuring compatibility and correct functionality. 
This property is essential for correctly routing requests, handling responses, and maintaining alignment with the expected API contract for the chosen Restlet version.\n\n**Field behavior**\n- Defines the version category or variant of the NetSuite Restlet API.\n- Influences request formatting, response parsing, and available features.\n- Determines which API endpoints and methods are accessible.\n- May impact authentication mechanisms and data serialization formats.\n- Ensures that the integration communicates with the correct Restlet version to prevent incompatibility issues.\n\n**Implementation guidance**\n- Use only predefined and officially supported Restlet version types provided by NetSuite.\n- Validate the type value against the current list of supported Restlet versions before deployment.\n- Update the type property when upgrading to a newer Restlet version or switching to a different variant.\n- Include the type property within the restletVersion object to explicitly specify the API version.\n- Coordinate changes to this property with client applications and integration workflows to maintain compatibility.\n- Monitor NetSuite release notes and documentation for any changes or deprecations related to Restlet versions.\n\n**Examples**\n- \"1.0\" — specifying the stable Restlet API version 1.0.\n- \"2.0\" — specifying the newer Restlet API version 2.0 with enhanced features.\n- \"beta\" — indicating a beta or experimental Restlet version for testing purposes.\n- \"custom\" — representing a custom or extended Restlet version tailored for specific use cases.\n\n**Important notes**\n- Providing an incorrect or unsupported type value can cause API calls to fail or behave unpredictably.\n- The type must be consistent with the NetSuite environment configuration and deployment settings.\n- Changing the type may necessitate updates in client-side code, authentication flows, and data handling logic.\n- Always consult the latest official NetSuite documentation to verify supported Restlet versions and their characteristics.\n- The type property is critical for maintaining long-term integration stability and compatibility.\n\n**Dependency chain**\n- Depends on the restlet object to define the overall Restlet configuration.\n- Influences other properties related to authentication, endpoint URLs, and data formats within the restletVersion context.\n- May affect downstream processing components that rely on version-specific behaviors.\n\n**Technical details**\n- Typically represented as a string value."},"enum":{"type":"array","items":{"type":"string"},"description":"A list of predefined string values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API. 
This enumeration restricts the input to specific allowed versions to ensure compatibility, prevent invalid version assignments, and facilitate validation and user interface enhancements.\n\n**Field behavior**\n- Defines the complete set of valid version identifiers for the `restletVersion` property.\n- Ensures that only officially supported RESTlet API versions can be selected or submitted.\n- Enables validation mechanisms to reject unsupported or malformed version inputs.\n- Supports auto-completion and dropdown selections in user interfaces and API clients.\n- Helps maintain consistency and compatibility across different API integrations and deployments.\n\n**Implementation guidance**\n- Populate the enum with all currently supported NetSuite RESTlet API versions as defined by official documentation.\n- Regularly update the enum values to reflect newly released versions or deprecated ones.\n- Use clear and consistent string formats that match the official versioning scheme (e.g., semantic versions like \"1.0\", \"2.0\" or date-based versions like \"2023.1\").\n- Implement strict validation logic to reject any input not included in the enum.\n- Consider backward compatibility when adding or removing enum values to avoid breaking existing integrations.\n\n**Examples**\n- [\"1.0\", \"2.0\", \"2.1\"]\n- [\"2023.1\", \"2023.2\", \"2024.1\"]\n- [\"v1\", \"v2\", \"v3\"] (if applicable based on versioning scheme)\n\n**Important notes**\n- Enum values must strictly align with the official NetSuite RESTlet API versioning scheme to ensure correctness.\n- Using a version value outside this enum should trigger a validation error and prevent API calls or configuration saves.\n- The enum acts as a safeguard against runtime errors caused by unsupported or invalid version usage.\n- Changes to this enum should be communicated clearly to all API consumers to manage version compatibility.\n\n**Dependency chain**\n- This enum is directly associated with and constrains the `restletVersion` property.\n- Updates to supported RESTlet API versions necessitate corresponding updates to this enum.\n- Validation logic and UI components rely on this enum to enforce version correctness.\n- Downstream processes that depend on the `restletVersion` value are indirectly dependent on this enum’s accuracy.\n\n**Technical details**\n- Implemented as a string enumeration type within the API schema.\n- Used by validation middleware or schema validators to enforce the allowed set of versions."},"lowercase":{"type":"boolean","description":"Specifies whether the restlet version string should be converted to lowercase characters to ensure consistent formatting.\n\n**Field behavior**\n- Determines if the restlet version identifier is transformed entirely to lowercase characters.\n- When set to true, the version string is converted to lowercase before any further processing or output.\n- When set to false or omitted, the version string retains its original casing as provided.\n- Affects only the textual representation of the version string, not its underlying value or meaning.\n\n**Implementation guidance**\n- Use this property to enforce uniform casing for version strings, particularly when interacting with case-sensitive systems or APIs.\n- Validate that the value assigned is a boolean (true or false).\n- Define a default behavior (commonly false) when the property is not explicitly set.\n- Apply the lowercase transformation early in the processing pipeline to maintain consistency.\n- Ensure that downstream components respect the transformed casing if this property is 
enabled.\n\n**Examples**\n- `true` — The version string \"V1.0\" is converted and output as \"v1.0\".\n- `false` — The version string \"V1.0\" remains unchanged as \"V1.0\".\n- Property omitted — The version string casing remains as originally provided.\n\n**Important notes**\n- Altering the casing of the version string may impact integrations with external systems that are case-sensitive.\n- Confirm compatibility with all consumers of the version string before enabling this property.\n- This property does not modify the semantic meaning or version number, only its textual case.\n- Should be used consistently across all instances where the version string is handled to avoid discrepancies.\n\n**Dependency chain**\n- Requires a valid restlet version string to perform the lowercase transformation.\n- May be used in conjunction with other formatting or validation properties related to the restlet version.\n- The effect of this property should be considered when performing version comparisons or logging.\n\n**Technical details**\n- Implemented as a boolean flag controlling whether to apply a lowercase function to the version string.\n- The transformation typically involves invoking a standard string lowercase method/function.\n- Should be executed before any version string comparisons, storage, or output operations.\n- Does not affect the internal representation of the version beyond its string casing."}},"description":"restletVersion specifies the version of the NetSuite Restlet script to be used for the API call. This property determines which version of the Restlet script is invoked, ensuring compatibility and proper execution of the request. It allows precise control over which iteration of the Restlet logic is executed, facilitating version management and smooth transitions between script updates.\n\n**Field behavior**\n- Defines the specific version of the Restlet script to target for the API request.\n- Influences the behavior, output, and compatibility of the API response based on the selected script version.\n- Enables management of multiple Restlet script versions within the same NetSuite environment.\n- Ensures that the API call executes the intended logic corresponding to the specified version.\n- Helps prevent conflicts or errors arising from script changes or updates.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of the deployed Restlet script in NetSuite.\n- Confirm that the specified version is properly deployed and active in the NetSuite account before use.\n- Adopt a clear and consistent versioning scheme (e.g., semantic versioning, date-based, or custom tags) to avoid ambiguity.\n- Update this property whenever switching to a newer or different Restlet script version to reflect the intended logic.\n- Validate the version string format to prevent malformed or unsupported values.\n- Coordinate version updates with deployment and testing processes to ensure smooth transitions.\n\n**Examples**\n- \"1.0\"\n- \"2.1\"\n- \"2023.1\"\n- \"v3\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version will cause the API call to fail or produce unexpected results.\n- Proper versioning supports backward compatibility and controlled feature rollouts.\n- Always verify the Restlet script version in the NetSuite environment before making API calls.\n- Version mismatches can lead to errors, data inconsistencies, or unsupported operations.\n- This property is critical for environments where multiple Restlet 
versions coexist.\n\n**Dependency chain**\n- Depends on the Restlet scripts deployed and versioned within the NetSuite environment.\n- Related to the authentication and authorization context of the API call, as permissions may vary by script version.\n- Works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet.\n- May interact with environment-specific configurations or feature flags tied to particular versions.\n\n**Technical details**\n- Typically represented as a string value matching the version identifier of the deployed Restlet script."},"criteria":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the criteria used within the NetSuite RESTlet API. It defines the nature or kind of the criteria being applied to filter or query data, enabling precise targeting of records based on their domain or entity type.\n\n**Field behavior**\n- Determines the specific category or classification of the criteria.\n- Influences how the criteria is interpreted and processed by the API.\n- Helps in filtering or querying data based on the defined type.\n- Typically expects a predefined set of values corresponding to valid criteria types.\n- Acts as a key discriminator that guides the API in applying the correct schema and validation rules for the criteria.\n- May affect the available fields and operators applicable within the criteria.\n\n**Implementation guidance**\n- Validate the input against the allowed set of criteria types to ensure correctness and prevent errors.\n- Use consistent, descriptive, and case-sensitive naming conventions for the type values as defined by NetSuite.\n- Ensure that the specified type aligns with the corresponding criteria structure and expected data fields.\n- Document all possible values and their meanings clearly for API consumers to facilitate correct usage.\n- Implement error handling to provide meaningful feedback when unsupported or invalid types are supplied.\n- Keep the list of valid types updated in accordance with changes in NetSuite API versions and account configurations.\n\n**Examples**\n- \"customer\" — to specify criteria related to customer records.\n- \"transaction\" — to filter based on transaction data such as sales orders or invoices.\n- \"item\" — to apply criteria on inventory or product items.\n- \"employee\" — to target employee-related data.\n- \"vendor\" — to filter vendor or supplier records.\n- \"customrecord_xyz\" — to specify criteria for a custom record type identified by its script ID.\n\n**Important notes**\n- The type value directly affects the behavior of the criteria and the resulting data set.\n- Incorrect or unsupported type values may lead to API errors, empty results, or unexpected behavior.\n- The set of valid types may vary depending on the NetSuite account configuration, customizations, and API version.\n- Always refer to the latest official NetSuite API documentation and your account’s schema for supported types.\n- Some types may require additional permissions or roles to access the corresponding data.\n- The type property is mandatory for criteria filtering to function correctly.\n\n**Dependency chain**\n- Depends on the overall criteria object structure within NetSuite.restlet.criteria.\n- Influences the available fields, operators, and values within the criteria definition."},"join":{"type":"string","description":"Specifies the criteria used to join related records or tables in a NetSuite RESTlet query, enabling the 
retrieval of data based on relationships between different record types.\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval.\n- Determines how records from different tables are combined based on matching fields.\n- Supports nested joins to allow complex queries involving multiple related records.\n**Implementation guidance**\n- Use valid join names as defined in NetSuite’s schema or documentation for the specific record types.\n- Ensure the join criteria align with the intended relationship to avoid incorrect or empty query results.\n- Combine with appropriate filters on the joined records to refine query results.\n- Validate join paths to prevent errors during query execution.\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using \"item\" join to filter transactions based on item attributes.\n- Nested join example: joining from a sales order to its customer and then to the customer’s address.\n**Important notes**\n- Incorrect join names or paths can cause query failures or unexpected results.\n- Joins may impact query performance; use them judiciously.\n- Not all record types support all possible joins; consult NetSuite documentation.\n- Joins are case-sensitive and must match NetSuite’s API specifications exactly.\n**Dependency chain**\n- Depends on the base record type specified in the query.\n- Works in conjunction with filter criteria to refine results.\n- May depend on authentication and permissions to access related records.\n**Technical details**\n- Typically represented as a string or object indicating the join path.\n- Used within the criteria object of a RESTlet query payload.\n- Supports multiple levels of nesting for complex joins.\n- Must conform to NetSuite’s SuiteScript or RESTlet API join syntax and conventions."},"operator":{"type":"string","description":"Specifies the comparison operator used to evaluate the criteria in a NetSuite RESTlet request.\n  This operator determines how the field value is compared against the specified criteria value(s) to filter or query records.\n  **Field behavior**\n  - Defines the type of comparison between a field and a value (e.g., equality, inequality, greater than).\n  - Influences the logic of the criteria evaluation in RESTlet queries.\n  - Supports various operators such as equals, not equals, greater than, less than, contains, etc.\n  **Implementation guidance**\n  - Use valid NetSuite-supported operators to ensure correct query behavior.\n  - Match the operator type with the data type of the field being compared (e.g., use numeric operators for numeric fields).\n  - Combine multiple criteria with appropriate logical operators if needed.\n  - Validate operator values to prevent errors in RESTlet execution.\n  **Examples**\n  - \"operator\": \"is\" (checks if the field value is equal to the specified value)\n  - \"operator\": \"isnot\" (checks if the field value is not equal to the specified value)\n  - \"operator\": \"greaterthan\" (checks if the field value is greater than the specified value)\n  - \"operator\": \"contains\" (checks if the field value contains the specified substring)\n  **Important notes**\n  - The operator must be compatible with the field type and the value provided.\n  - Incorrect operator usage can lead to unexpected query results or errors.\n  - Operators are case-sensitive and should match NetSuite's expected operator strings.\n  **Dependency 
chain**\n  - Depends on the field specified in the criteria to determine valid operators.\n  - Works in conjunction with the criteria value(s) to form a complete condition.\n  - May be combined with logical operators when multiple criteria are used.\n  **Technical details**\n  - Typically represented as a string value in the RESTlet criteria JSON object.\n  - Supported operators align with NetSuite's SuiteScript search operators.\n  - Must conform to the list of operators recognized by the NetSuite RESTlet API."},"searchValue":{"type":"object","description":"The value used as the search criterion to filter results in the NetSuite RESTlet API. This value is matched against the specified search field to retrieve relevant records based on the search parameters provided.\n**Field behavior**\n- Acts as the primary input for filtering search results.\n- Supports various data types depending on the search field (e.g., string, number, date).\n- Used in conjunction with other search criteria to refine query results.\n- Can be a partial or full match depending on the search configuration.\n**Implementation guidance**\n- Ensure the value type matches the expected type of the search field.\n- Validate the input to prevent injection attacks or malformed queries.\n- Use appropriate encoding if the value contains special characters.\n- Combine with logical operators or additional criteria for complex searches.\n**Examples**\n- \"Acme Corporation\" for searching customer names.\n- 1001 for searching by internal record ID.\n- \"2024-01-01\" for searching records created on or after a specific date.\n- \"Pending\" for filtering records by status.\n**Important notes**\n- The effectiveness of the search depends on the accuracy and format of the searchValue.\n- Case sensitivity may vary based on the underlying NetSuite configuration.\n- Large or complex search values may impact performance.\n- Null or empty values may result in no filtering being applied and all records being returned.\n**Dependency chain**\n- Depends on the searchField property to determine which field the searchValue applies to.\n- Works alongside searchOperator to define how the searchValue is compared.\n- Influences the results returned by the RESTlet endpoint.\n**Technical details**\n- Typically passed as a string in the API request payload.\n- May require serialization or formatting based on the API specification.\n- Integrated into the NetSuite search query logic on the server side.\n- Subject to NetSuite’s search limitations and indexing capabilities."},"searchValue2":{"type":"object","description":"searchValue2 is an optional property used to specify the second value in a search criterion within the NetSuite RESTlet API. 
It is typically used in conjunction with search operators that require two values, such as \"between\" or \"not between,\" to define a range or a pair of comparison values.\n\n**Field behavior**\n- Represents the second operand or value in a search condition.\n- Used primarily with operators that require two values (e.g., \"between\", \"not between\").\n- Optional field; may be omitted if the operator only requires a single value.\n- Works alongside searchValue (the first value) to form a complete search criterion.\n\n**Implementation guidance**\n- Ensure that searchValue2 is provided only when the selected operator requires two values.\n- Validate the data type of searchValue2 to match the expected type for the field being searched (e.g., date, number, string).\n- When using range-based operators, searchValue2 should represent the upper bound or second boundary of the range.\n- If the operator does not require a second value, omit this property to avoid errors.\n\n**Examples**\n- For a date range search: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- For a numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- For a \"not between\" operator: searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 without a compatible operator may result in an invalid search query.\n- The data type and format of searchValue2 must be consistent with searchValue and the field being queried.\n- This property is ignored if the operator only requires a single value.\n- Proper validation and error handling should be implemented when processing this field.\n\n**Dependency chain**\n- Dependent on the \"operator\" property within the same search criterion.\n- Works in conjunction with \"searchValue\" to define the search condition.\n- Part of the \"criteria\" array or object in the NetSuite RESTlet search request.\n\n**Technical details**\n- Data type varies depending on the field being searched (string, number, date, etc.).\n- Typically serialized as a JSON property in the RESTlet request payload.\n- Must conform to the expected format for the field and operator to avoid API errors.\n- Used internally by NetSuite to construct the appropriate search filter expression."},"formula":{"type":"string","description":"A string representing a custom formula used to define criteria for filtering or querying data within the NetSuite RESTlet API.\n  This formula allows users to specify complex conditions using NetSuite's formula syntax, enabling advanced and flexible data retrieval.\n  **Field behavior**\n  - Accepts a formula expression as a string that defines custom filtering logic.\n  - Used to create dynamic and complex criteria beyond standard field-value comparisons.\n  - Evaluated by the NetSuite backend to filter records according to the specified logic.\n  - Can incorporate NetSuite formula functions, operators, and field references.\n  **Implementation guidance**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string before submission to avoid runtime errors.\n  - Use this field when standard criteria fields are insufficient for the required filtering.\n  - Combine with other criteria fields as needed to build comprehensive queries.\n  **Examples**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n  - \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n  - \"NVL({amount}, 0) > 1000\"\n  **Important notes**\n  - Incorrect or invalid formulas 
may cause the API request to fail or return errors.\n  - The formula must be compatible with the context of the query and the fields available.\n  - Performance may be impacted if complex formulas are used extensively.\n  - Formula evaluation is subject to NetSuite's formula engine capabilities and limitations.\n  **Dependency chain**\n  - Depends on the availability of fields referenced within the formula.\n  - Works in conjunction with other criteria properties in the request.\n  - Requires understanding of NetSuite's formula syntax and functions.\n  **Technical details**\n  - Data type: string.\n  - Supports NetSuite formula syntax including SQL-like expressions and functions.\n  - Evaluated server-side during the processing of the RESTlet request.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"_id":{"type":"string","description":"Unique identifier for the record within the NetSuite system.\n**Field behavior**\n- Serves as the primary key to uniquely identify a specific record.\n- Immutable once the record is created.\n- Used to retrieve, update, or delete the corresponding record.\n**Implementation guidance**\n- Must be a valid NetSuite internal ID format, typically a string or numeric value.\n- Should be provided when querying or manipulating a specific record.\n- Avoid altering this value to maintain data integrity.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n**Important notes**\n- This ID is assigned by NetSuite and should not be generated manually.\n- Ensure the ID corresponds to an existing record to avoid errors.\n- When used in criteria, it filters the dataset to the exact record matching this ID.\n**Dependency chain**\n- Dependent on the existence of the record in NetSuite.\n- Often used in conjunction with other criteria fields for precise querying.\n**Technical details**\n- Typically represented as a string or integer data type.\n- Used in RESTlet scripts as part of the criteria object to specify the target record.\n- May be included in URL parameters or request bodies depending on the API design."}},"description":"Defines the set of conditions or filters used to specify which records should be retrieved or affected by the NetSuite RESTlet operation. 
This property enables clients to precisely narrow down the dataset by applying one or more criteria based on record fields, comparison operators, and values, supporting complex logical combinations to tailor the query results.\n\n  **Field behavior**\n  - Accepts a structured object or an array representing one or multiple filtering conditions.\n  - Supports logical operators such as AND, OR, and nested groupings to combine multiple criteria flexibly.\n  - Each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n  - Enables filtering on various data types including strings, numbers, dates, and booleans.\n  - Used to limit the scope of data returned or manipulated by the RESTlet to only those records that meet the specified conditions.\n  - When omitted or empty, the RESTlet may return all records or apply default filtering behavior as defined by the implementation.\n\n  **Implementation guidance**\n  - Validate the criteria structure rigorously to ensure it conforms to the expected schema before processing.\n  - Support nested criteria groups to allow complex and hierarchical filtering logic.\n  - Map criteria fields and operators accurately to corresponding NetSuite record fields and search operators, considering data types and operator compatibility.\n  - Handle empty or undefined criteria gracefully by returning all records or applying sensible default filters.\n  - Sanitize all input values to prevent injection attacks, malformed queries, or unexpected behavior.\n  - Provide clear error messages when criteria are invalid or unsupported.\n  - Optimize query performance by translating criteria into efficient NetSuite search queries.\n\n  **Examples**\n  - A single criterion filtering records where status equals \"Open\":\n    `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n  - Multiple criteria combined with AND logic:\n    `[{\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n  - Nested criteria combining OR and AND:\n    `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2} ] } ] }`"},"columns":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the data type of the column in the NetSuite RESTlet response. 
It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- \"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences how client applications parse, validate, and display the column values."},"join":{"type":"string","description":"Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to 
function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive but recommended to use uppercase for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"array","description":"Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"boolean","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- 
May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"Configuration for the summary (aggregation) applied to this column in saved search results, defined by the nested type, enum, and lowercase properties.\n\n**Field behavior**\n- Specifies how values in this column are aggregated, such as SUM, COUNT, AVG, MIN, or MAX, via the nested type property.\n- Causes the column to return rolled-up summary values rather than per-record values.\n- Optionally normalizes the casing of textual output via the nested lowercase flag.\n\n**Implementation guidance**\n- Choose a summary type appropriate for the column's data type (e.g., SUM or AVG for numeric fields, COUNT for most field types).\n- Validate the summary type against the allowed enumeration before submitting the configuration.\n- Verify that downstream consumers expect aggregated rather than per-record data when a summary is configured.\n\n**Examples**\n- { \"type\": \"SUM\" } — totals the numeric values in the column.\n- { \"type\": \"COUNT\" } — counts the matching records.\n- { \"type\": \"MIN\", \"lowercase\": true } — returns the minimum value, lowercased for text columns.\n\n**Important notes**\n- Configuring a summary changes the shape and meaning of the returned data; omitting it returns raw column values.\n- Some summary types are not applicable to certain data types (e.g., average on text fields).\n\n**Dependency chain**\n- Constrained by the nested type, enum, and lowercase properties described above.\n- Depends on the data type of the underlying column.\n\n**Technical details**\n- Represented as an object within the column definition.\n- Accessible via the RESTlet API under the path `netsuite.restlet.columns.summary`."},"formula":{"type":"string","description":"A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. 
This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"label":{"type":"string","description":"The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization resources if internationalization is supported.\n  **Technical 
details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"Specifies whether the query results are sorted by this column's values.\n  **Field behavior**\n  - When set to `true`, the result set is ordered by this column before being returned by the API.\n  - When set to `false` or omitted, the column does not participate in result ordering.\n  - Multiple columns may be flagged for sorting to define secondary sorting criteria.\n  **Implementation guidance**\n  - Enable sorting only on columns where ordering is meaningful for the consumer.\n  - Ensure the flag is set on a valid column in the dataset.\n  - Validate that the value is a boolean to prevent errors or unexpected behavior.\n  **Examples**\n  - `true` — results are ordered by this column's values.\n  - `false` — this column does not affect result ordering.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no column is flagged for sorting, the default sorting behavior of the API applies.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Implemented as a boolean flag on the column definition.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. 
By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - `[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically returns all available columns for the target record type, which can increase payload size and processing time."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. 
It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"integer","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. 
It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer\n- Must be a positive integer greater than zero\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size"},"max":{"type":"integer","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for 
environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"markExportedBatchSize: The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing API latency improvements can raise the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply 
to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters"},"TODO":{"type":"object","description":"TODO: A placeholder property used to indicate tasks, features, or sections within the NetSuite RESTlet integration that require implementation, completion, or further development. This property functions as a clear marker for developers and project managers to identify areas that are pending work, ensuring that these tasks are tracked and addressed before finalizing the API. It is not intended to hold any functional data or be part of the production API contract until fully implemented.\n\n**Field behavior**\n- Serves as a temporary indicator for incomplete, pending, or planned tasks within the API schema.\n- Does not contain operational data or affect API functionality until properly defined and implemented.\n- Helps track development progress and highlight areas needing attention during the development lifecycle.\n- Should be removed or replaced with finalized implementations once the associated task is completed.\n- May be used to generate reports or dashboards reflecting outstanding development work.\n\n**Implementation guidance**\n- Utilize the TODO property to explicitly flag API sections requiring further coding, configuration, or review.\n- Accompany TODO entries with detailed comments or references to issue tracking systems (e.g., JIRA, GitHub Issues) for clarity and traceability.\n- Establish regular review cycles to update, resolve, or remove TODO properties to maintain an accurate representation of development status.\n- Avoid deploying TODO properties in production environments to prevent confusion, incomplete features, or potential runtime errors.\n- Integrate TODO tracking with project management workflows to ensure timely resolution.\n\n**Examples**\n- TODO: Implement OAuth 2.0 authentication mechanism for RESTlet endpoints.\n- TODO: Add comprehensive validation rules for input parameters to ensure data integrity.\n- TODO: Complete error handling and logging for data retrieval failures.\n- TODO: Optimize response payload size for improved performance.\n- TODO: Integrate unit tests covering all new RESTlet functionalities.\n\n**Important notes**\n- The presence of TODO properties signifies incomplete or provisional functionality and should not be interpreted as finalized API features.\n- Unresolved TODO items can lead to partial implementations, unexpected behavior, or runtime errors if not addressed before release.\n- Effective management and timely resolution of TODO properties are critical for maintaining code quality, project timelines, and overall system stability.\n- TODO properties should be clearly documented and communicated within the development team to avoid oversight.\n\n**Dependency chain**\n- TODO properties may depend on other modules, services, or API components that are under development or pending integration.\n- Often linked to external project management or issue tracking tools for assignment, prioritization, and progress monitoring.
"},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during the execution of the NetSuite RESTlet hook. This parameter helps control the workload size for each batch operation, optimizing performance and resource utilization by balancing processing efficiency and system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle.\n- Directly influences the frequency and duration of batch processing operations.\n- Helps manage memory consumption and processing time by limiting the batch workload.\n- Affects overall throughput and latency of batch operations, impacting system responsiveness.\n- Controls how data is segmented and processed in discrete units during RESTlet execution.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts.\n- Larger batch sizes can improve throughput by reducing the number of batch cycles but may increase individual batch processing time and risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero to ensure proper operation.\n- Take into account NetSuite API governance limits, such as usage units and execution time, when determining batchSize.\n- Monitor system performance and adjust batchSize dynamically if possible to optimize processing efficiency.\n- Ensure batchSize aligns with other batch-related configurations to maintain consistency and predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high batchSize values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion.\n- Very low batchSize values can result in inefficient processing due to increased overhead and more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- batchSize should be consistent with other batch processing parameters to avoid conflicts or unexpected behavior.\n- Changes to batchSize may require adjustments in error handling and retry logic to accommodate different batch sizes.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by NetSuite governance limits and overall system performance."},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during the preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n- Serves as a key reference for file-related operations in automated workflows and integrations.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations.\n- Use this ID to fetch file metadata, content, or perform updates within the preSend hook.\n- Implement error handling to manage cases where the fileInternalId does not correspond to a valid or accessible file.\n- Ensure that the executing user or integration has the necessary permissions to access the file referenced by this ID.\n- Avoid hardcoding this ID; retrieve dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across different accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID.\n- This ID is essential for linking files reliably in automated processes, integrations, and RESTlet hooks.\n- Deleting and recreating a file will result in a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet.\n- Requires appropriate permissions to access or manipulate the file.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation.\n- Stored internally within NetSuite’s database as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Immutable identifier that does not change unless the file is deleted and recreated.\n- Integral to"},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. 
It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow.\n  - Optimize the function for performance to minimize latency in request processing.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data.\n  - Ensure that the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - If asynchronous processing is used, verify that the integration framework supports it to avoid unexpected behavior.\n  - Testing the function thoroughly is critical to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named"},"configuration":{"type":"object","description":"configuration: >\n  An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests.\n  - Can include settings such as authentication parameters (e.g., tokens, API keys), request modification flags (e.g., header adjustments), logging options (e.g., enable/disable logging), timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during the execution of the preSend hook to adapt request handling based on current context or conditions.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object to avoid ambiguity and ensure maintainability.\n  - Validate all configuration values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all possible configuration options, their expected data types, default values, and their specific effects on the preSend hook’s operation.\n  - Ensure sensitive information within the configuration (e.g., authentication tokens) is handled securely, following best practices for encryption and access control.\n  - Consider versioning the configuration schema if multiple versions of the preSend hook or integration exist.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" }`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`\n  **Important**"}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. 
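A complete preSend entry combining the sub-fields documented above might look like this (the function name and configuration values are illustrative):\n\n```json\n{\"preSend\": {\"function\": \"maskSensitiveFields\", \"fileInternalId\": \"1234\", \"configuration\": {\"enableLogging\": true, \"timeout\": 5000}}}\n```\n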
This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client.\n- Receives the response data as input and can modify it.\n- Can be used to log, audit, or transform the response payload.\n- Should return the final response object to be sent.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n- Any changes made here directly impact the final output received by the client.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment and use case.\n- Ensure any modifications maintain the expected response format and data integrity.\n- Avoid long-running or blocking operations to prevent delaying the response delivery.\n- Handle errors gracefully within the hook to prevent disrupting the overall RESTlet response flow.\n- Validate the modified response to ensure it complies with API schema and client expectations.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for auditing or debugging purposes.\n- Transforming response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent.\n- Modifications here directly affect what the client ultimately receives.\n- Errors thrown in this hook may cause the RESTlet to fail or return an error response.\n- Use this hook to enforce response-level policies, compliance, or data governance rules.\n- Avoid introducing side effects that could alter the idempotency or consistency of the response.\n- Testing this hook thoroughly is critical to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared.\n- Precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or transformations that prepared the response object."}},"description":"hooks: >\n  A set of hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
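A minimal hooks object combining the batchSize and preSend sub-fields documented above (all values illustrative):\n\n  ```json\n  {\"hooks\": {\"batchSize\": 100, \"preSend\": {\"function\": \"sanitizePayload\"}}}\n  ```\n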
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The 
record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"restlet: The identifier or URL of the NetSuite Restlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which Restlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the specific target Restlet script or endpoint for API calls within the NetSuite environment.\n- Routes requests to custom server-side scripts developed using NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations.\n- Supports various HTTP methods such as GET, POST, PUT, and DELETE depending on the Restlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n- Acts as the primary entry point for invoking custom logic that extends or complements standard NetSuite functionality.\n\n**Implementation guidance**\n- Confirm that the Restlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs.\n- Use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or full URL for external integrations or REST clients.\n- Verify that the Restlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, etc.).\n- Secure the Restlet endpoint by implementing authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic to manage scenarios where the Restlet is unavailable or returns errors.\n- Test the Restlet thoroughly in a non-production environment before deploying to production to ensure expected behavior and security compliance.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a Rest"},"distributed":{"type":"object","properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", 
\"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"disabled: Indicates whether the distributed feature in NetSuite is disabled or not. 
This boolean flag controls the availability and operational status of the distributed functionalities within the NetSuite integration, allowing administrators or systems to enable or disable these features as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is fully disabled, preventing any distributed operations or workflows from executing.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n- Acts as a toggle switch to control the accessibility of distributed capabilities within the NetSuite environment.\n- Changes to this flag directly influence the behavior of distributed-related processes and integrations.\n\n**Implementation guidance**\n- Use a boolean value: `true` to disable the distributed feature, `false` to enable it.\n- Before disabling, verify that no critical processes depend on distributed functionality to avoid disruptions.\n- Implement validation checks to confirm the current state before initiating distributed operations.\n- Provide clear user notifications or system logs when the feature is disabled to aid in troubleshooting and auditing.\n- Consider the impact on dependent modules and ensure coordinated updates if disabling this feature.\n\n**Examples**\n- `disabled: true`  # The distributed feature is turned off, disabling all related operations.\n- `disabled: false` # The distributed feature is active and available for use.\n- Omitted `disabled` property defaults to `false`, enabling the feature by default.\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, potentially causing failures or delays.\n- Some systems may require a restart or reinitialization after changing this setting for the change to take full effect.\n- Modifying this property should be restricted to users with appropriate permissions to prevent unauthorized disruptions.\n- Always assess the broader impact on the NetSuite integration before toggling this flag.\n\n**Dependency chain**\n- Directly affects modules and properties that rely on distributed functionality within the NetSuite integration.\n- Should be checked and respected by any API calls, workflows, or processes that involve distributed features.\n- May influence error handling and fallback mechanisms in distributed-related operations.\n\n**Technical details**\n- Data type: Boolean\n- Default value: `false` (distributed feature enabled)\n- Located under the `netsuite.distributed` namespace in the API schema\n- Changing this property triggers state changes in distributed feature availability within the system"},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. 
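Combined with executionContext, a typical trigger definition looks like this (the values shown are the documented defaults):\n\n```json\n{\"executionContext\": [\"userinterface\", \"webstore\"], \"executionType\": [\"create\", \"edit\", \"xedit\"]}\n```\n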
When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", \"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"string","description":"qualifier: A string value used to specify a particular qualifier or modifier that further defines, categorizes, or scopes the associated data within the NetSuite distributed context. This property enables more granular identification, filtering, and processing of data by applying specific criteria or attributes relevant to business logic, integration workflows, or operational requirements. It serves as an optional but powerful tool to distinguish data subsets, enhance data semantics, and support conditional handling in distributed NetSuite environments.\n\n**Field behavior**\n- Acts as an additional identifier or modifier to refine the meaning, scope, or classification of the associated data.\n- Enables filtering, categorization, or qualification of data entries in distributed NetSuite operations based on specific business rules.\n- Typically optional but may be mandatory in certain contexts or API endpoints where precise data segmentation is required.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n- Supports multiple use cases including regional segmentation, priority tagging, type classification, and channel identification.\n\n**Implementation guidance**\n- Ensure the qualifier value strictly aligns with the accepted set of qualifiers defined in the business domain, integration specifications, or system configuration.\n- Implement validation mechanisms to verify that the qualifier string matches allowed or expected values to avoid errors, misclassification, or unintended behavior.\n- Adopt consistent naming conventions and formatting standards (e.g., lowercase, hyphen-separated) for qualifiers to maintain clarity, readability, and interoperability across systems.\n- Maintain comprehensive documentation of all custom and standard qualifiers used, including their intended meaning and usage scenarios, to facilitate maintenance, troubleshooting, and future integrations.\n- Consider the impact of qualifiers on downstream processing, reporting, and analytics to ensure they are leveraged effectively and do not introduce ambiguity.\n\n**Examples**\n- \"region-us\" to specify data related to the United 
States region.\n- \"priority-high\" to indicate transactions or records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n- \"segment-enterprise\" to classify data pertaining to enterprise-level customers.\n- \"status-active\" to filter or identify active records within a dataset.\n\n**Important notes**\n- The qualifier should be meaningful, contextually relevant, and aligned with the business logic to ensure accurate data interpretation.\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering"},"skipExportFieldId":{"type":"string","description":"skipExportFieldId is an identifier for a specific field within the NetSuite distributed configuration that determines whether certain data should be excluded from export processes. It serves as a control mechanism to selectively omit data associated with particular fields during export operations, enabling tailored and efficient data handling.\n\n**Field behavior:**  \n- Acts as a flag or marker to skip exporting data associated with the specified field ID.  \n- When set, the export routines will omit the data linked to this field from being included in the export payload.  \n- Helps control and customize the export behavior on a per-field basis within distributed NetSuite configurations.  \n- Does not affect data visibility or storage within NetSuite; it only influences export output.  \n- Supports multiple uses in scenarios where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance:**  \n- Ensure the field ID provided corresponds to a valid and existing field within the NetSuite schema to prevent export errors.  \n- Use this property to optimize export operations by excluding unnecessary or sensitive data fields, improving performance and compliance.  \n- Validate the field ID format and existence before applying it to avoid runtime issues during export.  \n- Integrate with export logic to check this property before including fields in the export output, ensuring consistent behavior.  \n- Consider maintaining a centralized list or configuration of skipExportFieldIds for easier management and auditing.  \n\n**Examples:**  \n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)  \n- skipExportFieldId: \"item_custom_field_123\" (excludes a specific item custom field from export)  \n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)  \n\n**Important notes:**  \n- This property only affects export operations and does not alter data storage or visibility within NetSuite.  \n- Misconfiguration may lead to incomplete data exports if critical fields are skipped unintentionally, potentially impacting downstream processes.  \n- Should be used judiciously to maintain data integrity and compliance with business rules and regulatory requirements.  \n- Changes to this property should be documented and reviewed to avoid unintended data omissions.  \n\n**Dependency chain**\n- Depends on the existence and validity of the specified field ID within the NetSuite schema.  \n- Relies on export routines to check and respect this property during data export processes.  
\n- May interact with other export configuration settings that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is essential for accurately referencing and manipulating a specific file during various operations such as retrieval, update, or deletion within NetSuite's environment. It acts as a primary key that ensures precise targeting of files in automated workflows, scripts, and API calls.\n\n**Field behavior**\n- Serves as a unique and immutable key to identify a file in the NetSuite file cabinet.\n- Utilized in pre-send hooks and other automation points to specify the exact file being processed or referenced.\n- Must correspond to an existing file’s internal ID within the NetSuite account to ensure valid operations.\n- Enables consistent and reliable file operations by linking actions directly to the file’s system-assigned identifier.\n\n**Implementation guidance**\n- Always ensure the value is a valid integer that corresponds to an existing file’s internal ID in NetSuite.\n- Validate the internal ID before performing any file operations to avoid runtime errors or failed transactions.\n- Use this ID when invoking NetSuite APIs, SuiteScript, or other integration points to fetch, update, or delete files.\n- Avoid hardcoding the internal ID; instead, dynamically retrieve it through queries or API calls to maintain adaptability and reduce maintenance overhead.\n- Handle exceptions gracefully when the ID does not correspond to any file, providing meaningful error messages or fallback logic.\n\n**Examples**\n- 12345\n- 987654\n- 1001\n\n**Important notes**\n- The internal ID is system-generated by NetSuite and guaranteed to be unique within the account.\n- This ID is distinct from file names, external URLs, or folder identifiers and should not be confused with them.\n- Using an incorrect or non-existent internal ID will cause operations to fail, potentially interrupting workflows.\n- The internal ID remains constant for the lifetime of the file and does not change even if the file is moved or renamed.\n\n**Dependency chain**\n- Depends on the file existing in the NetSuite file cabinet prior to referencing.\n- Often used alongside related properties such as file name, folder ID, file type, or metadata to provide context or additional filtering.\n- May be required input for downstream processes that manipulate or validate file contents.\n\n**Technical details**\n- Represented as an integer value assigned by NetSuite upon file creation.\n- Immutable once assigned; cannot be altered or reassigned to a different file.\n- Used internally by NetSuite APIs, SuiteScript, and integration"},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be executed as a pre-send hook within the NetSuite distributed system. 
This function is invoked immediately before sending data or requests, allowing for custom processing, validation, or modification of the payload to ensure data integrity and compliance with business rules.\n\n  **Field behavior**\n  - Defines the exact function to be called prior to sending data or requests.\n  - Enables interception, inspection, and manipulation of data before transmission.\n  - Supports integration of custom business logic, validation, enrichment, or logging steps.\n  - Must reference a valid, accessible function within the current execution context or environment.\n  - The function’s execution outcome can influence whether the sending process proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the function name corresponds exactly to a defined function in the codebase, script environment, or registered hooks.\n  - The function should accept the expected input parameters (such as the payload or context) and return appropriate results or modifications.\n  - Implement robust error handling within the function to prevent unhandled exceptions that could disrupt the sending workflow.\n  - Document the function’s purpose, input/output contract, and side effects clearly for maintainability and future reference.\n  - Validate that the function executes efficiently and completes promptly to avoid introducing latency or blocking the sending process.\n  - If asynchronous operations are necessary, ensure they are properly awaited or handled to guarantee completion before sending.\n  - Follow consistent naming conventions aligned with the overall codebase or organizational standards.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"customAuthorizationCheck\"\n  - \"enrichOrderDetails\"\n  - \"checkInventoryAvailability\"\n\n  **Important notes**\n  - The function must be synchronous or correctly handle asynchronous behavior to ensure it completes before the send operation proceeds.\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the sending process depending on the implementation.\n  - Avoid performing long-running or blocking operations within the function to maintain system responsiveness.\n  - The function should not perform irreversible side effects unless explicitly intended, as it runs prior to data transmission.\n  - Ensure the function does not introduce security vulnerabilities, such as exposing sensitive data or allowing injection attacks.\n  - Consistent and clear error reporting within the function aids in troubleshooting and operational monitoring.\n\n  **Dependency**"},"configuration":{"type":"object","description":"configuration: >\n  Configuration settings for the preSend hook in the NetSuite distributed system.\n  This property defines the parameters and options that control the behavior and execution of the preSend hook, allowing customization of how data is processed before being sent.\n  It enables fine-tuning of operational aspects such as retries, timeouts, validation rules, logging, and payload constraints to ensure reliable and efficient data transmission.\n  **Field behavior**\n  - Specifies the customizable settings that dictate how the preSend hook operates.\n  - Controls data manipulation, validation, and preparation steps prior to sending.\n  - Can include flags, thresholds, retry policies, timeout durations, logging options, and other operational parameters.\n  - May be optional or mandatory depending on the 
specific implementation and requirements of the preSend hook.\n  - Supports nested configuration objects to allow detailed and structured settings.\n  **Implementation guidance**\n  - Define clear, well-documented configuration options that directly impact the preSend process.\n  - Validate all configuration values rigorously to ensure they conform to expected data types, ranges, and formats.\n  - Provide sensible default values for optional parameters to enhance usability and reduce configuration errors.\n  - Ensure backward compatibility when extending or modifying configuration options.\n  - Include comprehensive documentation for each configuration parameter, including its purpose, accepted values, and effect on hook behavior.\n  - Consider security implications when allowing configuration of headers or other sensitive parameters.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"customHeaders\": { \"X-Custom-Header\": \"value\" } }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  - `{ \"payloadCompression\": \"gzip\", \"timeout\": 10000 }`\n  **Important notes**\n  - Incorrect or invalid configuration values can cause the preSend hook to fail or behave unpredictably.\n  - Thorough testing of configuration changes in development or staging environments is critical before deploying to production.\n  - Some configuration changes may require restarting or reinitializing the hook or related services to take effect.\n  - Sensitive configuration parameters should be handled securely to prevent exposure of confidential information.\n  - Configuration should be version-controlled and documented to facilitate maintenance"}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
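As a sketch, such an entry reuses the sub-fields documented above (the function name and configuration values are illustrative):\n\n```json\n{\"preSend\": {\"function\": \"validateCustomerData\", \"configuration\": {\"retryCount\": 3, \"timeout\": 5000}}}\n```\n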
It allows for custom processing, modification, or validation of the request payload and headers, enabling dynamic adjustments or logging prior to transmission.\n\n**Field behavior**\n- Executed synchronously or asynchronously just before the API request is dispatched to the NetSuite endpoint.\n- Receives the full request object, including headers, body, query parameters, and other relevant metadata.\n- Permits modification of any part of the request, such as altering headers, adjusting the payload, or changing query parameters.\n- Supports validation logic to ensure the request meets required criteria; throwing an error will abort the request.\n- Enables injection of dynamic data like authentication tokens, custom headers, or correlation IDs.\n- Can be used for logging or auditing outgoing request details for debugging or monitoring purposes.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object.\n- Ensure that any asynchronous operations within the hook are properly awaited to maintain request integrity.\n- Keep processing lightweight to avoid introducing latency or blocking the request pipeline.\n- Handle exceptions carefully; unhandled errors will prevent the request from being sent.\n- Centralize request customization logic here to improve maintainability and reduce duplication.\n- Avoid side effects that could impact other parts of the system or subsequent requests.\n- Validate inputs thoroughly to prevent malformed requests from being sent to the API.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Modifying request parameters based on user roles or feature flags at runtime.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests across distributed systems.\n\n**Important notes**\n- This hook executes on every outgoing request, so its performance impact should be minimized.\n- Any modifications made within preSend directly affect the final request sent to NetSuite.\n- Throwing an error inside this hook will abort the request and propagate the error upstream.\n- This hook is strictly for pre-request processing and should not be used for handling responses.\n- Avoid making network calls or heavy computations inside this hook to prevent delays.\n- Ensure thread safety if the hook accesses shared resources or global state.\n\n**Dependency chain**\n- Invoked after request construction but before the request is dispatched.\n- Precedes any network transmission or retry"}},"description":"hooks: >\n  A collection of user-defined functions or callbacks that are executed at specific points during the lifecycle of the distributed process within the NetSuite integration. 
These hooks enable customization and extension of the default behavior by injecting custom logic before, during, or after key operations, allowing for flexible adaptation to unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Contains one or more functions or callback references mapped to specific lifecycle events.\n  - Each hook corresponds to a distinct event or stage in the distributed process, such as pre-processing, post-processing, error handling, or data transformation.\n  - Hooks are invoked automatically by the system at predefined points in the workflow.\n  - Can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors.\n  - Supports both synchronous and asynchronous execution models depending on the hook’s purpose and implementation.\n  - Execution order of hooks for the same event is deterministic and should be documented.\n  - Hooks should be designed to avoid side effects that could impact other parts of the process.\n\n  **Implementation guidance**\n  - Define hooks as named functions or references to executable code blocks compatible with the integration environment.\n  - Ensure hooks are idempotent to prevent unintended consequences from repeated or retried executions.\n  - Validate all inputs and outputs rigorously within hooks to maintain data integrity and system stability.\n  - Use hooks to integrate with external systems, perform custom validations, enrich data, or implement business-specific logic.\n  - Document each hook’s purpose, expected inputs, outputs, and any side effects clearly for maintainability.\n  - Implement robust error handling within hooks to gracefully manage exceptions without disrupting the main process flow.\n  - Test hooks thoroughly in isolated and integrated environments to ensure reliability and performance.\n  - Consider security implications, such as data exposure or injection risks, when implementing hooks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs detailed transaction metadata after a successful operation for auditing purposes.\n  - A hook that modifies or enriches payload data during transformation stages to align with NetSuite’s schema.\n  - A hook that triggers email or system notifications upon error occurrences to alert support teams.\n  - A hook that retries failed operations with exponential backoff to improve resilience.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: >\n  A collection of related sublist objects associated with the main record, representing grouped sets of data entries that provide additional details or linked information within the NetSuite distributed record context. 
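As a purely illustrative sketch (this schema does not pin down the exact sublist shape), a sales order's item sublist might be represented as:\n\n  ```json\n  {\"sublists\": {\"item\": [{\"item\": \"widget-a\", \"quantity\": 2, \"rate\": 10.0}]}}\n  ```\n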
These sublists enable the organization of complex, hierarchical data by encapsulating related records or line items that belong to the primary record, facilitating detailed data management and interaction within the system.\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record.\n  - Organizes and structures complex record information into manageable, logically grouped sections.\n  - Supports nested and hierarchical data representation, allowing for detailed and granular record composition.\n  - Typically handled as arrays or lists of sublist objects, supporting iteration and manipulation.\n  - Reflects one-to-many relationships inherent in NetSuite records, such as line items or related entities.\n  **Implementation guidance**\n  - Ensure each sublist object strictly adheres to the defined schema and data types for its specific sublist type.\n  - Validate the consistency and referential integrity of sublist data in relation to the main record to prevent data anomalies.\n  - Design for dynamic handling of sublists, accommodating varying sizes including empty or large collections.\n  - Implement robust CRUD (Create, Read, Update, Delete) operations for sublist entries to maintain accurate and up-to-date data.\n  - Consider transactional integrity when modifying sublists to ensure changes are atomic and consistent.\n  **Examples**\n  - A sales order record containing sublists for item lines detailing products, quantities, and prices; shipping addresses specifying delivery locations; and payment schedules outlining installment plans.\n  - An employee record with sublists for dependents listing family members; employment history capturing previous roles and durations; and certifications documenting professional qualifications.\n  - A customer record including sublists for contacts with communication details; transactions recording purchase history; and communication logs tracking interactions and notes.\n  **Important notes**\n  - Sublists are critical for accurately modeling one-to-many relationships within NetSuite records, enabling detailed data capture and reporting.\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing.\n  - Maintaining synchronization between sublists and the main record is essential to preserve data integrity and prevent inconsistencies.\n  - Performance considerations should be taken into account when handling large sublists to optimize system responsiveness.\n  **Dependency chain**\n  - Depends on the main record schema"},"referencedFields":{"type":"array","items":{"type":"string"},"description":"referencedFields: >\n  A list of field identifiers that are referenced within the current context, typically used to denote dependencies or relationships between fields in a NetSuite distributed environment. 
This property helps in mapping out how different fields interact or rely on each other, facilitating data integrity, validation, and synchronization across distributed components or services.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on or interacts with.\n  - Used to establish explicit relationships or dependencies between multiple fields.\n  - Enables tracking of data flow and ensures consistency across distributed systems.\n  - Supports dynamic resolution of dependencies during runtime or configuration.\n  **Implementation guidance**\n  - Populate with valid and existing field identifiers as defined in the NetSuite schema or metadata.\n  - Verify that all referenced fields are accessible and correctly scoped within the current context.\n  - Use this property to manage dependencies critical for data validation, synchronization, or processing logic.\n  - Keep the list updated to reflect any schema changes to avoid broken references or inconsistencies.\n  - Avoid circular references by carefully managing dependencies between fields.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  - [\"productCode\", \"inventoryLevel\", \"reorderThreshold\"]\n  **Important notes**\n  - Referenced fields must be unique within the list to prevent redundancy and confusion.\n  - Modifications to referenced fields can impact dependent processes; changes should be tested thoroughly.\n  - This property contains only the identifiers (names or keys) of fields, not their actual data or values.\n  - Proper documentation of referenced fields improves maintainability and clarity of dependencies.\n  **Dependency chain**\n  - Often linked with fields that require validation or data aggregation from other fields.\n  - May influence or be influenced by business rules, workflows, or automation scripts that depend on multiple fields.\n  - Changes in referenced fields can cascade to affect dependent fields or processes.\n  **Technical details**\n  - Data type: Array of strings.\n  - Each string represents a unique field identifier within the NetSuite distributed environment.\n  - The array should be serialized in a format compatible with the consuming system (e.g., JSON array).\n  - Maximum length and allowed characters for field identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects that represent associated records or entities linked to the primary record within the NetSuite distributed data model. These related lists provide contextual information and enable navigation to connected data, facilitating comprehensive data retrieval and management. 
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, thereby supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries, each representing a distinct association to the primary record.\n- Enables retrieval of linked records such as transactions, custom records, subsidiary data, or other relevant entities.\n- Supports hierarchical or relational data structures by referencing related entities, allowing nested or multi-level associations.\n- Typically read-only in the context of distributed data retrieval but may support updates or synchronization depending on API capabilities and permissions.\n- May include metadata such as record counts, last updated timestamps, or status indicators for each related list.\n- Supports dynamic inclusion or exclusion based on user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Populate with relevant related list objects that are directly associated with the primary record, ensuring accurate representation of relationships.\n- Ensure each related list entry includes unique identifiers, descriptive metadata, and navigation links or references necessary for data access and traversal.\n- Maintain consistency in naming conventions, data structures, and field formats to align with NetSuite’s standard data model and API specifications.\n- Implement pagination, filtering, or sorting mechanisms to efficiently handle large sets of related records within each list.\n- Validate all references and links to ensure data integrity, preventing broken or stale connections within the distributed data environment.\n- Consider caching strategies or incremental updates to optimize performance when dealing with frequently accessed related lists.\n- Respect and enforce access control and permission checks to ensure users only see related lists they are authorized to access.\n\n**Examples**\n- A customer record’s relatedLists might include “Transactions” (e.g., sales orders, invoices), “Contacts” (associated individuals), and “Cases” (customer support tickets).\n- An invoice record’s relatedLists could contain “Payments” (payment records), “Shipments” (delivery details), and “Adjustments” (billing corrections).\n- A custom record type might have relatedLists such as “Attachments” (files linked to the record) or “Notes” (user comments or annotations).\n- A vendor record’s relatedLists may include “Purchase Orders,” “Bills,” and “Vendor Contacts"},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload the data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. 
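A minimal sketch of these cache-bypass semantics, assuming an in-process cache; `fetch_from_source` is a hypothetical stand-in for the authoritative read:

```python
import time

_cache: dict[str, tuple[float, dict]] = {}

def load_config(key: str, force_reload: bool = False, ttl: float = 300.0) -> dict:
    """Return a cached config unless force_reload is set or the entry expired."""
    now = time.monotonic()
    if not force_reload and key in _cache:
        fetched_at, value = _cache[key]
        if now - fetched_at < ttl:
            return value                # serve from cache
    value = fetch_from_source(key)      # authoritative read, bypassing cache
    _cache[key] = (now, value)          # refresh the cache entry
    return value

def fetch_from_source(key: str) -> dict:
    # Placeholder for the real lookup against the primary store.
    return {"key": key}
```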
This flag is critical in scenarios where data accuracy and freshness are paramount, such as after configuration changes or data updates that must be immediately reflected.\n\n**Field behavior**\n- When set to true, the system bypasses all caches and reloads data or configurations directly from the primary source, ensuring the latest state is retrieved.\n- When set to false or omitted, the system may utilize cached or previously stored data to optimize performance and reduce load times.\n- Primarily used in contexts where stale data could lead to errors, inconsistencies, or outdated processing results.\n- The reload operation triggered by this flag typically involves invalidating caches and refreshing dependent components or services.\n\n**Implementation guidance**\n- Use this flag judiciously to balance data freshness against system performance, avoiding unnecessary reloads that could degrade responsiveness.\n- Ensure that enabling forceReload initiates a comprehensive refresh cycle, including clearing relevant caches and reinitializing configuration or data layers.\n- Implement robust error handling during the reload process to manage potential failures without causing system downtime or inconsistent states.\n- Monitor system resource utilization and response times when forceReload is active to identify and mitigate performance bottlenecks.\n- Document scenarios and triggers for using forceReload to guide developers and operators in its appropriate application.\n\n**Examples**\n- forceReload: true  \n  (forces the system to bypass caches and reload data/configuration from the authoritative source immediately)\n- forceReload: false  \n  (allows the system to serve data from cache if available, improving response time)\n- forceReload omitted  \n  (defaults to false behavior, relying on cached data unless otherwise specified)\n\n**Important notes**\n- Excessive or unnecessary use of forceReload can lead to increased latency, higher resource consumption, and potential service degradation.\n- This flag does not validate the correctness or integrity of the source data; it only ensures the latest available data is fetched.\n- Downstream systems or processes should be designed to handle the potential delays or transient states caused by forced reloads.\n- Coordination with cache invalidation policies and data synchronization mechanisms is essential to maintain overall system consistency.\n\n**Dependency chain**\n- Relies on underlying cache management and invalidation frameworks to effectively bypass stored data.\n- Interacts with data retrieval modules, configuration loaders, and possibly distributed synchronization services.\n- May trigger logging, monitoring, or alerting activity as part of each forced refresh cycle."},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across different operational environments. 
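A small validation helper in the spirit of the guidance that follows; the set of supported environment names mirrors the examples in this description and would need to match your actual deployment:

```python
SUPPORTED_ENVIRONMENTS = {"development", "staging", "production", "custom"}

def validate_io_environment(value: str) -> str:
    """Reject unsupported values up front rather than failing at runtime."""
    if value not in SUPPORTED_ENVIRONMENTS:
        raise ValueError(
            f"unsupported ioEnvironment {value!r}; "
            f"expected one of {sorted(SUPPORTED_ENVIRONMENTS)}"
        )
    return value

validate_io_environment("staging")  # ok; an unknown value raises ValueError
```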
This property determines the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees within the distributed architecture.\n\n**Field behavior**\n- Determines the operational context for all input/output processes within the distributed NetSuite system.\n- Influences how data is read from and written to various storage systems, message queues, or communication channels.\n- Affects performance tuning, security measures, and data consistency mechanisms based on the selected environment.\n- Typically set during system initialization or configuration phases and remains stable during runtime to ensure predictable behavior.\n- May trigger environment-specific logging, monitoring, and error-handling strategies.\n\n**Implementation guidance**\n- Validate the ioEnvironment value against a predefined set of supported environments such as \"development,\" \"staging,\" \"production,\" and any custom configurations.\n- Ensure that the selected environment is compatible with other system settings related to data handling, network communication, and security policies.\n- Implement robust error handling and fallback mechanisms to manage unsupported or invalid environment values gracefully.\n- Clearly document the operational implications, limitations, and recommended use cases for each environment option to guide system administrators and developers.\n- Coordinate environment settings across all distributed nodes to maintain consistency and prevent configuration drift.\n\n**Examples**\n- \"development\" — used for local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production settings for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined environment configurations tailored for specialized I/O requirements or experimental setups.\n\n**Important notes**\n- Changing the ioEnvironment typically requires restarting services or reinitializing connections to apply new configurations.\n- The environment setting directly impacts data integrity, access controls, and compliance with security policies.\n- Sensitive data must be handled according to the security standards appropriate for the selected environment.\n- Consistency across all distributed nodes is critical; all nodes should be configured with compatible ioEnvironment values to avoid data inconsistencies or communication failures.\n- Misconfiguration can lead to degraded performance, security vulnerabilities, or data loss.\n\n**Dependency chain**\n- Depends on system initialization and configuration management components.\n- Interacts with data storage modules, network communication layers, and security frameworks.\n- Influences logging, monitoring, and error-handling strategies."},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment. 
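A hedged sketch of FQDN validation along the lines recommended here; this is a simplified RFC 1035-style pattern check, not a full DNS resolution test:

```python
import re

# Loose FQDN pattern: dot-separated labels of letters, digits, and hyphens,
# with no label starting or ending in a hyphen, and at least one dot overall.
_LABEL = r"[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
_FQDN = re.compile(rf"^(?:{_LABEL}\.)+{_LABEL}$")

def is_valid_io_domain(domain: str) -> bool:
    return len(domain) <= 253 and _FQDN.match(domain) is not None

assert is_valid_io_domain("api.netsuite.com")
assert not is_valid_io_domain("-bad-.example")
```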
This domain is critical for routing data requests and responses between distributed components and services, ensuring seamless communication and integration across the system.\n\n**Field behavior**\n- Defines the domain name utilized for network communication in distributed NetSuite environments.\n- Serves as the base domain for constructing URLs for API calls, data synchronization, and service endpoints.\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards.\n- Typically remains consistent within a deployment environment but can differ across environments such as development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify that the domain is properly configured in DNS and is resolvable by all distributed components.\n- Validate the domain format against standard domain naming conventions (e.g., RFC 1035).\n- Ensure the domain supports secure communication protocols (e.g., HTTPS with valid SSL/TLS certificates).\n- Coordinate updates to ioDomain with network, security, and operations teams to maintain service continuity.\n- When migrating or scaling services, update ioDomain accordingly and propagate changes to all dependent components.\n- Monitor domain accessibility and performance to detect and resolve connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n- \"dev-networks.internal.company.com\"\n\n**Important notes**\n- Incorrect or misconfigured ioDomain values can cause failed network requests, service interruptions, and data synchronization errors.\n- The domain must support necessary security certificates to enable encrypted communication and protect data in transit.\n- Changes to ioDomain may necessitate updates to firewall rules, proxy configurations, and network security policies.\n- Consistency in ioDomain usage across distributed components is essential to avoid routing conflicts and authentication issues.\n- Consider the impact on caching, CDN configurations, and DNS propagation delays when changing ioDomain.\n\n**Dependency chain**\n- Dependent on underlying network infrastructure, DNS setup, and domain registration.\n- Utilized by distributed service components for constructing communication endpoints.\n- May affect authentication and authorization workflows that rely on domain validation or origin verification.\n- Interacts with security components such as SSL/TLS certificate management and firewall configurations.\n- Influences monitoring, logging, and troubleshooting processes related to network communication."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the precise date and time when the data was last successfully synchronized between the system and NetSuite. 
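For illustration, a small helper that treats a missing lastSyncedDate as "never synced" and compares in UTC; the one-hour staleness window is an arbitrary example, not a platform default:

```python
from datetime import datetime, timedelta, timezone

def needs_sync(last_synced: str | None, max_age: timedelta = timedelta(hours=1)) -> bool:
    """Decide whether an incremental sync is due, comparing in UTC."""
    if not last_synced:
        return True  # never synced
    # ISO 8601 with a trailing 'Z'; fromisoformat accepts the '+00:00' form.
    synced_at = datetime.fromisoformat(last_synced.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - synced_at > max_age

print(needs_sync("2024-06-15T14:30:00Z"))  # True once the record is stale
print(needs_sync(None))                    # True: no prior sync recorded
```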
This timestamp is essential for monitoring the freshness, consistency, and integrity of synchronized data, enabling systems to determine whether updates or incremental syncs are necessary.\n\n**Field behavior**\n- Captures the exact date and time of the most recent successful synchronization event.\n- Automatically updates only after a sync operation completes successfully without errors.\n- Serves as a reference point to assess if data is current or requires refreshing.\n- Typically stored and transmitted in ISO 8601 format to maintain uniformity across different systems and platforms.\n- Does not reflect the start time or duration of the synchronization process, only its successful completion.\n\n**Implementation guidance**\n- Record the timestamp in Coordinated Universal Time (UTC) to prevent timezone-related inconsistencies.\n- Update this field exclusively after confirming a successful synchronization to avoid misleading data states.\n- Validate the date and time format rigorously to comply with ISO 8601 standards (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Utilize this timestamp to drive incremental synchronization logic, data refresh triggers, or audit trails in downstream workflows.\n- Handle cases where the field may be null or missing, indicating that no synchronization has occurred yet.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n- \"2024-01-10T23:59:59Z\"\n\n**Important notes**\n- This timestamp marks the completion of synchronization, not its initiation.\n- Do not update this field if the synchronization process fails or is incomplete.\n- Maintaining timezone consistency (UTC) is critical to avoid synchronization conflicts or data mismatches.\n- The field may be null or omitted if synchronization has never been performed.\n- Systems relying on this field should implement fallback or error handling for missing or invalid timestamps.\n\n**Dependency chain**\n- Depends on successful completion of the synchronization process between the system and NetSuite.\n- Influences downstream processes such as incremental sync triggers, data validation, and audit logging.\n- May be referenced by monitoring or alerting systems to detect synchronization delays or failures.\n\n**Technical details**\n- Stored as a string in ISO 8601 format with UTC timezone designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Should be generated programmatically at the moment synchronization completes successfully"},"settings":{"type":"object","description":"settings: >\n  Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - Encapsulates a collection of key-value pairs representing various configuration parameters that govern the distributed NetSuite integration’s operation.\n  - Includes toggles (boolean flags), numeric thresholds, timeouts, batch sizes, logging options, and other customizable settings relevant to distributed processing workflows.\n  - Typically optional for basic usage but essential for advanced customization, performance tuning, and adapting the integration to specific deployment environments.\n  - Changes to these settings can dynamically alter the integration’s behavior, such as retry logic, concurrency limits, and error handling strategies.\n  **Implementation guidance**\n  - Define each setting with a clear, descriptive key name and an appropriate data type (e.g., integer, boolean, string).\n  - Validate input 
values rigorously to ensure they fall within acceptable ranges or conform to expected formats to prevent runtime errors.\n  - Provide sensible default values for all settings to maintain stable and predictable integration behavior when explicit configuration is absent.\n  - Document each setting comprehensively, including its purpose, valid values, default, and impact on the integration’s operation.\n  - Consider versioning or schema validation to manage changes in settings structure over time.\n  - Ensure that sensitive information is either excluded or securely handled if included within settings.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — configures retry attempts, enables detailed logging, and sets operation timeout.\n  - `{ \"batchSize\": 50, \"useNonProduction\": false }` — sets the number of records processed per batch and specifies production environment usage.\n  - `{ \"maxConcurrentJobs\": 10, \"errorThreshold\": 5, \"logLevel\": \"DEBUG\" }` — limits concurrent jobs, sets error tolerance, and defines logging verbosity.\n  **Important notes**\n  - Modifications to settings may require restarting or reinitializing the integration service to apply changes effectively.\n  - Incorrect or suboptimal configuration can cause integration failures, data inconsistencies, or degraded performance.\n  - Avoid storing sensitive credentials or secrets in settings unless encrypted or otherwise secured.\n  - Settings should be managed carefully in multi-environment deployments to prevent configuration drift.\n  **Dependency chain**\n  - Dependent on the overall NetSuite integration configuration and"},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to utilize the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture within the NetSuite environment.\n\n**Field behavior**\n- Determines if the SuiteScript 2.0 (SS2) framework is enabled for the NetSuite integration.\n- When set to true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions.\n- When set to false or omitted, the system defaults to using SuiteScript 1.0 or legacy frameworks.\n- Influences script loading mechanisms, module resolution, and API compatibility within NetSuite.\n- Impacts debugging, deployment, and maintenance processes due to differences in framework structure.\n\n**Implementation guidance**\n- Set this property to true to leverage modern SuiteScript 2.0 features such as improved modularity, asynchronous processing, and enhanced performance.\n- Verify that all custom scripts, modules, and third-party integrations are fully compatible with SuiteScript 2.0 before enabling this flag.\n- Conduct thorough testing in a non-production or development environment to identify potential issues arising from framework changes.\n- Use this flag to facilitate gradual migration from legacy SuiteScript 1.0 to SuiteScript 2.0, allowing toggling between frameworks during transition phases.\n- Update deployment pipelines and CI/CD processes to accommodate SuiteScript 2.0 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` — Disables SuiteScript 2.0, falling back to legacy SuiteScript 1.0 framework.\n- Property omitted — Defaults to legacy SuiteScript framework (typically 1.0), maintaining backward 
compatibility.\n\n**Important notes**\n- Enabling the SS2 framework may require refactoring existing scripts to comply with SuiteScript 2.0 syntax, including the use of define/require for module loading.\n- Some legacy APIs, global objects, and modules available in SuiteScript 1.0 may be deprecated or behave differently in SuiteScript 2.0.\n- Performance improvements and new features in SS2 may not be realized if scripts are not properly adapted.\n- Ensure that all scheduled scripts, workflows, and integrations are reviewed for compatibility to prevent runtime errors.\n- Documentation and developer training may be necessary to fully leverage SuiteScript 2.0 capabilities.\n\n**Dependency chain**\n- Depends on SuiteScript 2.0 support in the target NetSuite account and on the compatibility of all deployed scripts and modules with the SS2 framework."},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the framework version within the NetSuite distributed system. It defines the nature or role of the framework version, such as whether it is a major release, minor update, patch, or experimental build. This classification is essential for managing version control, deployment strategies, and compatibility assessments across the distributed environment.\n\n**Field behavior**\n- Determines how the framework version is identified, categorized, and processed within the system.\n- Influences compatibility checks, update mechanisms, and deployment workflows.\n- Enables filtering, sorting, and selection of framework versions based on their type.\n- Affects automated decision-making processes such as rollback, promotion, or deprecation of versions.\n\n**Implementation guidance**\n- Use standardized, predefined values or enumerations to represent different types (e.g., \"major\", \"minor\", \"patch\", \"experimental\") to ensure consistency.\n- Maintain uniform naming conventions and case sensitivity across all framework versions.\n- Implement validation logic to restrict the type property to allowed categories, preventing invalid or unsupported entries.\n- Document any custom or extended types clearly to avoid ambiguity.\n- Ensure that changes to the type property trigger appropriate notifications or logging for audit purposes.\n\n**Examples**\n- \"major\" — indicating a significant release that introduces new features or breaking changes.\n- \"minor\" — representing smaller updates that add enhancements or non-breaking improvements.\n- \"patch\" — for releases focused on bug fixes, security patches, or minor corrections.\n- \"experimental\" — denoting versions under testing, development, or not intended for production use.\n- \"deprecated\" — marking versions that are no longer supported or recommended for use.\n\n**Important notes**\n- The type value directly impacts deployment strategies, including automated rollouts and rollback procedures.\n- Accurate and consistent typing is critical for automated update systems and dependency management tools to function correctly.\n- Changes to the type property should be documented thoroughly and communicated to all relevant stakeholders to avoid confusion.\n- Misclassification can lead to improper handling of versions, potentially causing system instability or incompatibility.\n- The property should be reviewed regularly to align with evolving release management policies.\n\n**Dependency chain**\n- Depends on the version numbering scheme and release management policies defined in the frameworkVersion.\n- Interacts with deployment, update, and compatibility modules that rely on the type classification to 
determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alert"},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management"},"lowercase":{"type":"object","description":"lowercase: Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the content or structure of the version string, only its 
letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided by the programming environment.\n- Should be applied consistently across"}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system. 
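Tying the pieces above together (the enum allow-list, the optional lowercase normalization, and version-format checking), a sketch of how a client might validate a frameworkVersion value; the allow-list here mirrors the sample enum values shown earlier and is not authoritative:

```python
import re

SUPPORTED_VERSIONS = {"1.0.0", "1.1.0", "2.0.0"}          # mirrors the sample enum
SEMVER = re.compile(r"^\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]+)?$")

def normalize_framework_version(raw: str, lowercase: bool = True) -> str:
    version = raw.strip()
    if lowercase:
        version = version.lower()           # e.g. "V1.2.3" -> "v1.2.3"
    version = version.lstrip("v")           # compare on the bare number
    if not SEMVER.match(version):
        raise ValueError(f"not a semantic version: {raw!r}")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported frameworkVersion: {raw!r}")
    return version

print(normalize_framework_version("V1.1.0"))  # "1.1.0"
```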
It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts 
or quantities equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or"},"getList":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"type: >\n  Specifies the category or classification of the records to be retrieved from the NetSuite system.\n  This property determines the type of entities that the getList operation will query and return.\n  It defines the scope of the data retrieval by indicating which NetSuite record type the API should target.\n  **Field behavior**\n  - Defines the specific record type to fetch, such as customers, transactions, or items.\n  - Influences the structure, fields, and format of the returned data based on the selected record type.\n  - Must be set to a valid NetSuite record type identifier recognized by the API.\n  - Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n  **Implementation guidance**\n  - Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n  - Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n  - Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n  - Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n  - When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n  **Examples**\n  - \"customer\"\n  - \"salesOrder\"\n  - \"inventoryItem\"\n  - \"employee\"\n  - \"vendor\"\n  - \"purchaseOrder\"\n  **Important notes**\n  - Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n  - The type property directly affects query performance, response size, and the complexity of the returned data.\n  - Some record types may require 
additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n  - Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n  **Dependency chain**\n  - The 'type' property is a required input for the NetSuite.getList operation.\n  - The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n  - Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n  - Validation and error handling mechanisms rely on the correctness of the 'type' value.\n  **Technical details**\n  - Accepts string values corresponding to valid NetSuite record type identifiers."},"typeId":{"type":"string","description":"The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n  **Field behavior**\n  - Specifies the category or type of records to fetch from NetSuite.\n  - Determines the schema and fields available in the returned records.\n  - Must correspond to a valid NetSuite record type identifier.\n  **Implementation guidance**\n  - Use predefined NetSuite record type IDs as per NetSuite documentation.\n  - Validate the typeId before making the API call to avoid errors.\n  - Ensure the typeId aligns with the permissions and roles of the API user.\n  **Examples**\n  - \"customer\" for customer records.\n  - \"salesOrder\" for sales order records.\n  - \"employee\" for employee records.\n  **Important notes**\n  - Incorrect or unsupported typeId values will result in API errors.\n  - The typeId is case-sensitive and must match NetSuite's expected values.\n  - Changes in NetSuite's API or record types may affect valid typeId values.\n  **Dependency chain**\n  - Depends on the NetSuite record types supported by the account.\n  - Influences the structure and content of the getList response.\n  **Technical details**\n  - Typically a string value representing the internal NetSuite record type.\n  - Used as a parameter in the getList API endpoint to filter records.\n  - Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system. 
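Since record type identifiers are case-sensitive, a pre-flight check like the following can catch casing mistakes before a call is made; the known-types set is illustrative and should come from your own configuration:

```python
# Case-sensitive record type lookup used for validation before a getList call.
KNOWN_RECORD_TYPES = {"customer", "salesOrder", "inventoryItem", "employee", "vendor"}

def check_record_type(type_id: str) -> str:
    if type_id not in KNOWN_RECORD_TYPES:
        # Suggest a casing fix when the only difference is letter case.
        hint = next((t for t in KNOWN_RECORD_TYPES if t.lower() == type_id.lower()), None)
        detail = f"; did you mean {hint!r}?" if hint else ""
        raise ValueError(f"unknown record type {type_id!r}{detail}")
    return type_id

check_record_type("salesOrder")      # ok
# check_record_type("salesorder")    # raises with a casing hint
```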
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- 
Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"object","description":"_id: The unique identifier for a record within the NetSuite system. This identifier is used to retrieve, update, or reference specific records in API operations.\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"getList: Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
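For orientation, a hypothetical request body combining the properties documented in this getList section; the field names mirror this schema, but the exact wire format should be confirmed against the API:

```python
# Illustrative getList-style request payload (not a confirmed wire format).
get_list_request = {
    "type": "salesOrder",
    "searchPreferences": {
        "bodyFieldsOnly": True,        # skip joined-record fields
        "pageSize": 50,                # records per page
        "returnSearchColumns": False,  # identifiers only, no column data
    },
}
```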
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. 
This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May interact with other `searchPreferences` settings, such as `returnSearchColumns`, that further shape the returned result set."},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks. 
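A sketch of the pagination loop this property enables, assuming a hypothetical client object that exposes page fetches and reports a total record count:

```python
def fetch_all(search, page_size: int = 50):
    """Drain a paginated search; `search.fetch_page` is a hypothetical client API."""
    page = 1
    while True:
        result = search.fetch_page(page=page, page_size=page_size)
        yield from result["records"]
        # Stop once every record has been covered by the pages fetched so far.
        if page * page_size >= result["totalRecords"]:
            break
        page += 1
```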
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting page"},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns: Specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce 
bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `returnSearchRows`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `returnSearchRows` (controls whether search rows are returned).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"searchPreferences: Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending 
order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with sibling `searchPreferences` properties such as `pageSize` and `returnSearchColumns` to shape the final result set."},"file":{"type":"object","description":"Configuration for retrieving files from NetSuite file cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead. For example, {{{myFileField.fileName}}}.
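\n\nTo make this concrete, here is a minimal `netsuite.file` sketch for a standard (parsed) file export; the folder IDs are the illustrative placeholders used throughout this section, not real values:\n```json\n{\n  \"netsuite\": {\n    \"file\": {\n      \"folderInternalId\": \"12345\",\n      \"backupFolderInternalId\": \"67890\"\n    }\n  }\n}\n```\nThe export-level `type` field is deliberately left unset in this sketch, since `netsuite.file` implies a standard parsed export.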
\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user must have access to the folder\n- Folder permissions and user roles can impact the ability to perform operations even with a valid folderInternalId\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder hierarchy changes do not affect the folderInternalId, ensuring persistent reference integrity\n- Internal IDs may differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored. 
This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Handled as a string containing the numeric folder ID in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"112233\"\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting or renaming the folder associated with this ID will disrupt backup processes until updated.\n- Proper access rights and permissions are mandatory to write backup files to the specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: String containing NetSuite's numeric internal folder ID.\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript or REST APIs."}}}}},"RDBMS":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Rdbms export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be 
included for other connection types. It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. 
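\n\nPutting the S3 fields together, here is a hedged sketch of one possible configuration (assuming the export-level property is named `s3`; bucket names, prefixes, and filters are illustrative):\n```json\n{\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"example-orders-inbound\",\n    \"keyStartsWith\": \"exports/\",\n    \"keyEndsWith\": \".csv\",\n    \"backupBucket\": \"example-orders-archive\",\n    \"keyPrefix\": \"processed/\"\n  }\n}\n```\nWith these values, CSV files under `exports/` would be retrieved and then backed up into `example-orders-archive` under the `processed/` prefix.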
As that warning indicates, only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (e.g., files, HTTP responses). Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. 
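\n\nFor instance, a hedged parser sketch (the node path is illustrative) that forces every `/order/line` node to parse as an array:\n```json\n[\n  {\n    \"version\": \"1\",\n    \"type\": \"xml\",\n    \"rules\": {\n      \"V0_json\": false,\n      \"listNodes\": [\"/order/line\"]\n    }\n  }\n]\n```\n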
Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. 
Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. 
REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
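\n\nFor instance, this hypothetical snippet shows the anti-pattern to avoid; it would NOT create a nested `customer` object:\n```json\n{\"generate\": \"customer.firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}  // WRONG - dot notation in generate\n```\n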
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. 
Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. 
**Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - 
\"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this export."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this export was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this export."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this export is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this export expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this export."}}}]},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. 
The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/exports/{_id}":{"put":{"summary":"Update an export","description":"Updates an existing export with the provided configuration.\nThis is used for major updates to an export's structure or behavior.\n","operationId":"updateExport","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"200":{"description":"Export updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
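
To make the update operation concrete, here is a minimal Python sketch against the `PUT /v1/exports/{_id}` endpoint defined above, using the third-party `requests` library. The token, ids, and body values are placeholders, and a real body must also carry the adaptor-specific configuration object for your export type.

```python
import requests

API_BASE = "https://api.integrator.io"  # use https://api.eu.integrator.io for EU accounts
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder 24-char ObjectId

# Hypothetical request body: an HTTP export tied to an existing connection.
body = {
    "name": "Orders export",
    "adaptorType": "HTTPExport",
    "_connectionId": "507f191e810c19729de860ea",  # placeholder connection id
    # "http": {...}  # adaptor-specific configuration omitted in this sketch
}

resp = requests.put(
    f"{API_BASE}/v1/exports/{EXPORT_ID}",
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
)

if resp.status_code == 200:
    updated = resp.json()  # full Response object, including read-only fields
    print(updated["_id"], updated["lastModified"])
elif resp.status_code == 401:
    print(resp.json()["message"])  # bare {message} body, not the errors envelope
else:
    print(resp.status_code, resp.json().get("errors"))  # standard error envelope
```

Note the 401 branch: as documented above, auth failures return a bare `{message}` object rather than the standard `{errors: [...]}` envelope.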

## Delete an export

> Deletes an export. The export is soft-deleted and retained in the recycle bin\
> for 30 days before permanent removal. If the export is currently in use by\
> any flows, those flows may fail until reconfigured.<br>

```json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/exports/{_id}":{"delete":{"summary":"Delete an export","description":"Deletes an export. The export is soft-deleted and retained in the recycle bin\nfor 30 days before permanent removal. If the export is currently in use by\nany flows, those flows may fail until reconfigured.\n","operationId":"deleteExport","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"204":{"description":"Export deleted successfully"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
```
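
A matching Python sketch for the delete endpoint; the token and export id are again placeholders. Success is a bodyless 204, so the only useful signal is the status code.

```python
import requests

API_BASE = "https://api.integrator.io"
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder export id

resp = requests.delete(
    f"{API_BASE}/v1/exports/{EXPORT_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

if resp.status_code == 204:
    # Soft-deleted: recoverable from the recycle bin for 30 days.
    print("Export deleted")
elif resp.status_code == 404:
    print("Export not found or not visible to this token")
else:
    resp.raise_for_status()
```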

## Patch an export

> Partially updates an export using a JSON Patch document (RFC 6902).\
> Only the `replace` operation is supported, and only on the following
> whitelisted paths:
>
> | Path | Description |
> |------|-------------|
> | `/debugUntil` | Debug logging expiry (ISO-8601, max 1 hour from now) |
> | `/assistantMetadata` | Assistant metadata object |
>
> All other paths are rejected with `422`.

```json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"JsonPatchRequest":{"type":"array","description":"A JSON Patch document (RFC 6902). Send an array of patch\noperations. Only the `replace` operation is supported, and only\non whitelisted fields — all other paths are rejected with 422.","minItems":1,"items":{"$ref":"#/components/schemas/JsonPatchOperation"}},"JsonPatchOperation":{"type":"object","description":"A single JSON Patch operation (RFC 6902).","required":["op","path"],"properties":{"op":{"type":"string","enum":["replace"],"description":"The operation to perform. Only `replace` is supported."},"path":{"type":"string","description":"JSON Pointer (RFC 6901) to the field to patch. Only\nwhitelisted paths are accepted — unlisted paths return\n`422` with `\"<path> is not a whitelisted property\"`."},"value":{"description":"The new value to set at the given path."}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. 
The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/exports/{_id}":{"patch":{"summary":"Patch an export","description":"Partially updates an export using a JSON Patch document (RFC 6902).\nOnly the `replace` operation is supported, and only on the following\nwhitelisted paths:\n\n| Path | Description |\n|------|-------------|\n| `/debugUntil` | Debug logging expiry (ISO-8601, max 1 hour from now) |\n| `/assistantMetadata` | Assistant metadata object |\n\nAll other paths are rejected with `422`.","operationId":"patchExport","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/JsonPatchRequest"}}}},"responses":{"204":{"description":"Export patched successfully"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
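
For illustration, a `PATCH /v1/exports/{_id}` body that enables debug logging on an export might look like the sketch below. The timestamp is a placeholder — per the whitelist table above, `/debugUntil` must be an ISO-8601 value no more than one hour in the future. A successful patch returns 204 with no body.

````json
[
  {
    "op": "replace",
    "path": "/debugUntil",
    "value": "2025-01-15T10:30:00.000Z"
  }
]
````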

## Clone an export

> Creates a copy of an existing export.\
> Referenced connections can optionally be remapped to replacements via connectionMap.<br>

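For illustration, a clone request that names the copy and swaps in a different connection might look like the following sketch, built against the CloneRequest schema below. Both ids are placeholder 24-character hex strings, not real resources; if `name` were omitted, the server may generate a default clone name.

````json
{
  "name": "Clone of daily orders export",
  "connectionMap": {
    "5f1a2b3c4d5e6f7a8b9c0d1e": "6a2b3c4d5e6f7a8b9c0d1e2f"
  }
}
````
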
````json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"CloneRequest":{"type":"object","description":"Request body for cloning an export.","properties":{"name":{"type":"string","description":"Optional name for the cloned resource. If omitted, the server may generate a default clone name."},"connectionMap":{"type":"object","description":"Optional mapping of original connection ids to replacement connection ids.\nKeys are source connection ids on the original resource; values are target connection ids.\n","additionalProperties":{"type":"string"}}},"additionalProperties":true},"CloneResponse":{"description":"Response body for a clone operation. Some clone endpoints return the cloned resource, while others may return a list of related created resources.","oneOf":[{"$ref":"#/components/schemas/Response"},{"type":"array","items":{"type":"object","properties":{"model":{"type":"string","description":"Model name of the created resource (e.g., Flow, Export, Import)."},"_id":{"type":"string","format":"objectId","description":"Unique id of the created resource."},"name":{"type":"string","description":"Optional name of the created resource."}},"required":["_id"]}}]},"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this export."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this export was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this export."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this export is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this export expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this export."}}}]},"Request":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor clarity when managing multiple integrations. 
The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. 
This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). 
The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid \"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate limiting and concurrency controls apply differently\n\nWhen false (default), the export 
operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File"}}},"http":{"$ref":"#/components/schemas/Http"},"file":{"$ref":"#/components/schemas/File"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"as2":{"$ref":"#/components/schemas/AS2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"ftp":{"$ref":"#/components/schemas/FTP"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports 
that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. `adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. 
Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional 
acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. 
It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. 
Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adapter types (except HTTP and REST where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction:**\n- The adaptor type is HTTP. For HTTP exports, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" in your response. 
If you include it, the configuration will be invalid.\n\nExample HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types. 
Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. 
Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. 
Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. **Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. **System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. 
May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**Hmac Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more complex to configure\n\n**Secret url**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose 
the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**Sha-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**Sha-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**Sha-384/sha-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": 
\"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. 
It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
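A minimal pairing might look like the following sketch (placeholder credentials, shown only to illustrate how username and password combine with the verify setting):\n\n```\n\"verify\": \"basic\",\n\"username\": \"webhook_user\",\n\"password\": \"<strong-randomly-generated-password>\"\n```\n\n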
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 ok**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**Json (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**Xml**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n\nIMPORTANT: Be careful when setting 
security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook provider's documentation for exact 
requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 ok (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**Json (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**Xml**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. 
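For example, a Slack-style Events API verification pairs the three challenge settings as in this sketch:\n\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\",\n\"challengeSuccessStatusCode\": 200,\n\"challengeSuccessMediaType\": \"json\"\n```\n\n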
Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**Utf-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**Utf-16le (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**Gb18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**Iso-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift jis (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for ai agents**\n\n1. **Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. 
**Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**Csv Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**Json Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**Xml Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data. 
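As a quick sketch of how this mode combines with the format settings above (illustrative values for a CSV file parsed into records):\n\n```\n\"type\": \"csv\",\n\"output\": \"records\",\n\"csv\": { \"hasHeaderRow\": true }\n```\n\n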
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for ai agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. 
**Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. **Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for ai agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for ai agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. **Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**Pgp/gpg Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. **Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for ai agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects concurrent processing but within system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes| Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quotes handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard csv (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European csv (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. **Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. 
**Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. **Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. 
**Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When headers exist but should be ignored, use `hasHeaderRow: false` with `rowsToSkip: 1` so the header line is skipped and fields are referenced by position\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for AI agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count from Zero**:\n    - Row 1 = 0, Row 2 = 1, Row 3 = 2, etc.\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. **For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. 
**Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior - test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**Json parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. **Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. 
**Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. 
**Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. **Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- \"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". 
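A minimal illustrative sketch of the relevant fields (siblings in the same parent object; the surrounding export configuration is omitted):\n\n```\n\"type\": \"xlsx\",\n\"xlsx\": {\n  \"hasHeaderRow\": true\n}\n```\n\n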
This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. **Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields 
are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. **Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
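A minimal illustrative sketch, assuming records repeat as `<Record>` elements under a `<Records>` root (surrounding export configuration omitted):\n\n```\n\"type\": \"xml\",\n\"xml\": {\n  \"resourcePath\": \"/Records/Record\"\n}\n```\n\n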
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**Xml parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced xml**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or `/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**Xpath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. 
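A quick illustrative contrast of common selection forms:\n\n```\n/Records/Record             selects Record elements that are direct children of Records\n//Record                    selects every Record element at any depth\n//Record[@type=\"product\"]   selects only Record elements with a matching attribute\n```\n\n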
When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For repeating integration, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**Xpath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. **Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. 
**Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. **Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common xpath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
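A minimal illustrative sketch (the ObjectId shown is a placeholder, not a real definition):\n\n```\n\"type\": \"filedefinition\",\n\"fileDefinition\": {\n  \"_fileDefinitionId\": \"5f1a2b3c4d5e6f7a8b9c0d1e\"\n}\n```\n\n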
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. **File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**Edi Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique 
identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct id**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. **File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese are the most commonly available fields across file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
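In configuration terms this is a single string value, e.g. (illustrative path only):\n\n```\n\"backupPath\": \"/archive/exports/backups\"\n```\n\n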
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. **Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of 
filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. 
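For example, a sketch of a filter that only processes items whose `status` field equals \"open\" (field name illustrative; the operators are documented under `rules` below):\n\n```\n\"filter\": {\n  \"type\": \"expression\",\n  \"expression\": {\n    \"version\": \"1\",\n    \"rules\": [\"equals\", [\"string\", [\"extract\", \"status\"]], \"open\"]\n  }\n}\n```\n\n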
This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- 
Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. 
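For example, a sketch of a script filter (the ObjectId is a placeholder for an existing Script resource):\n\n```\n\"filter\": {\n  \"type\": \"script\",\n  \"script\": {\n    \"_scriptId\": \"5f1a2b3c4d5e6f7a8b9c0d2a\",\n    \"function\": \"filterItems\"\n  }\n}\n```\n\n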
This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Http":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (MOST COMMON CASE)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (RARE USE CASE)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files served by a file server over HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
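For example, assuming a hypothetical connection whose baseURI is `https://api.example.com/v2`:\n\n```\nbaseURI:      https://api.example.com/v2\nrelativeURI:  /orders?status=pending\nrequest URL:  https://api.example.com/v2/orders?status=pending\n```\n\n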
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
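As an illustrative sketch (endpoint and body values are hypothetical), a mostly-JSON API with one form-encoded endpoint could override just that export:\n\n```json\n{\n  \"method\": \"POST\",\n  \"relativeURI\": \"/search\",\n  \"requestMediaType\": \"urlencoded\",\n  \"body\": \"status=pending&limit=100\"\n}\n```\n\n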
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. 
Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for AI agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. **page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. 
**skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. **nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. 
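As one more hedged sketch (endpoint and parameter names are hypothetical), an offset-based setup wires the paging variable into the relative URI like this:\n\n```json\n{\n  \"relativeURI\": \"/orders?limit=100&offset={{export.http.paging.skip}}\",\n  \"paging\": {\n    \"method\": \"skip\"\n  }\n}\n```\n\n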
Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for AI agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. 
When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page url pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error** PATTERNS\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use** \"body\":\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\" \n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. 
Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. 
This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. 
For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. 
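For instance, paired with the \"Pattern 1\" response shown above, a plausible configuration sketch is:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1,\n  \"maxPagePath\": \"meta.totalPages\"\n}\n```\n\n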
The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. `[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. 
Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
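For example, if the API returns errors like `{\"error\": {\"message\": \"Invalid token\"}}`, a plausible setting is `error.message`. 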
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but inverted logic — checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"_httpConnectorEndpointId: Identifier for the HTTP connector endpoint used in the integration or API call. This identifier uniquely specifies which HTTP connector endpoint configuration should be utilized to route and process the request within the system. It ensures that API calls are directed to the correct external or internal HTTP service endpoint as defined in the integration setup.\n\n**Field behavior**\n- Uniquely identifies the HTTP connector endpoint within the system to ensure precise routing.\n- Used during the execution of integrations or API calls to select the appropriate HTTP endpoint configuration.\n- Typically required and must be specified when configuring or invoking HTTP-based connectors.\n- Once set, it generally remains immutable to maintain consistent routing behavior.\n- Acts as a key reference to the endpoint’s configuration, including URL, headers, authentication, and other settings.\n\n**Implementation guidance**\n- Must correspond to a valid and existing HTTP connector endpoint identifier within the system.\n- Validate the identifier against the current list of configured HTTP connector endpoints before use to prevent errors.\n- Avoid changing the identifier after initial assignment to prevent routing inconsistencies.\n- Ensure that the endpoint configuration referenced by this ID is active and properly configured.\n- Implement error handling for cases where the identifier does not match any existing endpoint.\n\n**Examples**\n- \"endpoint_12345\"\n- \"httpConnectorEndpoint_abcde\"\n- \"prodHttpEndpoint01\"\n- \"dev_api_gateway_2024\"\n- \"externalServiceEndpoint-01\"\n\n**Important notes**\n- This field is critical for directing API calls to the correct HTTP endpoint; incorrect values can cause failed connections or routing errors.\n- Missing or invalid identifiers will typically result in integration failures or exceptions.\n- Proper permissions and access controls must be in place to use the specified HTTP connector endpoint.\n- Changes to the endpoint configuration referenced by this ID may affect all integrations relying on it.\n- The identifier should be managed carefully in 
deployment and version control processes.\n\n**Dependency chain**\n- Depends on the existence and configuration of HTTP connector endpoints within the system.\n- May be linked to authentication, authorization, and security settings associated with the endpoint.\n- Integration logic or API call workflows depend on this identifier to resolve the correct HTTP endpoint.\n- Changes in endpoint configurations or identifiers may require updates in dependent integrations.\n\n**Technical details**\n- Data type: String\n- Format: system-assigned identifier; like the other `_...Id` reference fields on this resource, treat it as an opaque reference\n- Stored as a reference (foreign key) to the HTTP connector endpoint definition"}}},"Salesforce":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce, either through queries or real-time events.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n**Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**SOQL Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. 
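For instance, a minimal query-based configuration might look like the following sketch (the query itself is hypothetical):\n\n```json\n{\n  \"type\": \"soql\",\n  \"api\": \"rest\",\n  \"soql\": {\n    \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\"\n  }\n}\n```\n\n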
This decision affects all other configuration aspects.\n","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": \"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**SOQL Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static file ID**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic file ID (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous 
steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**REST API**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk API 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your 
data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the 
export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- Use * sparingly (only with specific sObjects that support it)\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Case-sensitive (match Salesforce API names exactly)\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When using relationship queries, child objects count against\ngovernor limits differently. 
For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":20000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event 
handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
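For example, 1,000 queued events with \"batchSize\": 100 would arrive as roughly 10 batches, while \"batchSize\": 10 would deliver the same events as about 100 smaller, lower-latency batches. 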
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild records related to the primary object. 
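A minimal sketch of one entry (names are illustrative, not required values):\n\n```\n{\n  \"sObjectType\": \"Contact\",\n  \"parentField\": \"AccountId\",\n  \"referencedFields\": [\"FirstName\", \"LastName\", \"Email\"],\n  \"filter\": \"IsActive = true\"\n}\n```\n\n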
Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- Empty array will retrieve only Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = 
true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**As2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**As2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. 
It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. 
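Taken together, the fields of this object describe a single DynamoDB Query request. A minimal sketch (table, key, and value names are illustrative):\n\n```\n{\n  \"region\": \"us-east-1\",\n  \"method\": \"query\",\n  \"tableName\": \"Customers\",\n  \"keyConditionExpression\": \"#pk = :pkValue\",\n  \"expressionAttributeNames\": \"{\\\"#pk\\\": \\\"customerId\\\"}\",\n  \"expressionAttributeValues\": \"{\\\":pkValue\\\": \\\"12345\\\"}\"\n}\n```\n\n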
It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid 
operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP":{"type":"object","description":"Configuration object for 
FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Jdbc export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** What belongs in this object\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's type is `\"once\"`\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's type is \"once\".**\n\nIf the export's type is \"once\", you MUST include this object.\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the 
actual record ID from each exported row.\n"}}}}},"MongoDB":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently support the following operational modes:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, 
incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum 
size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**Netsuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.blob`\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"` and configure `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. 
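A rough sketch of the two shapes (IDs are placeholders):\n\n```\n{ \"type\": \"blob\", \"netsuite\": { \"internalId\": \"<file cabinet file id>\" } }\n\n{ \"netsuite\": { \"type\": \"file\", \"file\": { \"folderInternalId\": \"<folder internal id>\" } } }\n```\n\n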
For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This allows you to use `suiteapp2.0` saved searches with dynamic inputs easily.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob exports (raw file transfer without parsing)"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"A collection of key-value pairs that provide additional contextual information about the NetSuite entity. This metadata can include custom attributes, tags, or any supplementary data that helps to describe, categorize, or operationally enhance the entity beyond its standard properties. 
It serves as an extensible mechanism to store user-defined or system-generated information that is not part of the core entity schema, enabling greater flexibility and customization in managing NetSuite data.\n\n**Field behavior**\n- Stores arbitrary additional information related to the NetSuite entity, enhancing its descriptive or operational context.\n- Can include custom fields defined by users, system-generated tags, flags, timestamps, or nested structured data.\n- Typically represented as a dictionary or map with string keys and values that may be strings, numbers, booleans, arrays, or nested objects.\n- Metadata entries are optional and do not affect the core entity behavior unless explicitly integrated with business logic.\n- Supports dynamic addition, update, and removal of metadata entries without impacting the primary entity schema.\n\n**Implementation guidance**\n- Ensure all metadata keys are unique within the collection to prevent accidental overwrites.\n- Support flexible and heterogeneous value types, including primitive types and nested structures, to accommodate diverse metadata needs.\n- Validate keys and values against naming conventions, length restrictions, and allowed character sets to maintain consistency and prevent errors.\n- Implement efficient mechanisms for CRUD (Create, Read, Update, Delete) operations on metadata entries to facilitate easy management.\n- Consider indexing frequently queried metadata keys for performance optimization.\n- Provide clear documentation or schema definitions for any standardized or commonly used metadata keys.\n\n**Examples**\n- {\"department\": \"Sales\", \"region\": \"EMEA\", \"priority\": \"high\"}\n- {\"customField1\": \"value1\", \"customFlag\": true}\n- {\"tags\": [\"urgent\", \"review\"], \"lastUpdatedBy\": \"user123\"}\n- {\"approvalStatus\": \"pending\", \"reviewCount\": 3, \"metadataVersion\": 2}\n- {\"nestedInfo\": {\"createdBy\": \"admin\", \"createdAt\": \"2024-05-01T12:00:00Z\"}}\n\n**Important notes**\n- Metadata is optional and should not interfere with the core functionality or validation of the NetSuite entity.\n- Modifications to metadata typically do not trigger business workflows or logic unless explicitly configured to do so."},"selectoption":{"type":"object","description":"Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each selectoption consists of a user-friendly label and an associated value that uniquely identifies the option internally. 
This structure enables consistent data entry, filtering, and categorization within NetSuite forms and records.\n\n**Field behavior**\n- Defines a single, discrete choice available to users in selection interfaces such as dropdowns, radio buttons, or multi-select lists.\n- Can be part of a collection of options presented to the user for making a selection.\n- Includes both a display label (visible to users) and a corresponding value (used internally or in API interactions).\n- Supports filtering, categorization, and conditional logic based on the selected option.\n- May be dynamically generated or statically defined depending on the field configuration.\n\n**Implementation guidance**\n- Assign a unique and stable value to each selectoption to prevent ambiguity and maintain data integrity.\n- Use clear, concise, and user-friendly labels that accurately describe the option’s meaning.\n- Validate option values against expected data types and formats to ensure compatibility with backend processing.\n- Implement localization strategies for labels to support multiple languages without altering the underlying values.\n- Consistently apply selectoption structures across all fields requiring predefined choices to standardize user experience.\n- Consider accessibility best practices when designing labels and selection controls.\n\n**Examples**\n- { label: \"Active\", value: \"1\" }\n- { label: \"Inactive\", value: \"2\" }\n- { label: \"Pending Approval\", value: \"3\" }\n- { label: \"High Priority\", value: \"high\" }\n- { label: \"Low Priority\", value: \"low\" }\n\n**Important notes**\n- The label is intended for display purposes and may be localized; the value is the definitive identifier used in data processing and API calls.\n- Values should remain consistent over time to avoid breaking integrations or corrupting data.\n- When supporting multiple languages, labels should be translated appropriately while keeping values unchanged.\n- Changes to selectoption values or labels should be managed carefully to prevent unintended side effects.\n- Selectoption entries may be influenced by the context of the parent record, user roles, or permissions.\n\n**Dependency chain**\n- Utilized within field definitions that support selection inputs (e.g., dropdowns, radio buttons, and multi-select lists)."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, providing comprehensive details about each custom field's configuration, behavior, and constraints to facilitate accurate data handling and UI generation.\n\n**Field behavior**\n- Contains detailed metadata about custom fields, including their definitions, types, configurations, and constraints.\n- Provides contextual information necessary for understanding, validating, and manipulating custom fields programmatically.\n- May include attributes such as field ID, label, data type, default values, validation rules, display settings, sourcing information, and field dependencies.\n- Used to dynamically interpret or generate UI elements, data validation logic, or data structures based on custom field configurations.\n- Reflects the current state of custom fields as defined in the NetSuite account, enabling synchronization between the API consumer and the NetSuite environment.\n\n**Implementation guidance**\n- Ensure that the metadata accurately reflects the current state of custom fields in the NetSuite account by synchronizing regularly or on configuration changes.\n- 
Update the metadata whenever custom fields are added, modified, or removed to maintain consistency and prevent data integrity issues.\n- Use this metadata to validate input data against custom field constraints (e.g., data type, required status, allowed values) before processing or submission.\n- Consider caching metadata for performance optimization but implement mechanisms to refresh it periodically or on-demand to capture updates.\n- Handle cases where customFieldMetadata might be null, incomplete, or partially loaded gracefully, including fallback logic or error handling.\n- Respect user permissions and access controls when retrieving or exposing custom field metadata to ensure compliance with security policies.\n**Examples**\n- A custom field metadata object describing a custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- Metadata for a custom list/record field specifying the list of valid options, their internal IDs, and whether multiple selections are allowed.\n- Information about a custom date field including its date format, minimum and maximum allowed dates, and any validation rules applied.\n- Metadata describing a custom currency field with precision settings and default currency.\n**Important notes**\n- The structure and content of customFieldMetadata may vary depending on the NetSuite configuration, customizations, and API version.\n- Access to custom field metadata may require appropriate permissions within the NetSuite environment; unauthorized access may result in incomplete or no metadata.\n- Changes to custom fields in NetSuite (such as renaming, deleting, or changing data types) can impact the metadata and any integrations that depend on it."},"skipGrouping":{"type":"boolean","description":"skipGrouping: Indicates whether to bypass the grouping of related records or transactions during processing, allowing each item to be handled individually rather than aggregated into groups.\n\n**Field behavior**\n- When set to true, the system processes each record or transaction independently, without combining them into groups based on shared attributes.\n- When set to false or omitted, related records or transactions are aggregated according to predefined grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Influences how data is structured, summarized, and reported in outputs or passed to downstream systems.\n- Affects the level of detail and granularity available in the processed data.\n\n**Implementation guidance**\n- Utilize this flag to control processing granularity, especially when detailed, record-level analysis or reporting is required.\n- Confirm that downstream systems, reports, or integrations can accommodate ungrouped data if skipGrouping is enabled.\n- Assess the potential impact on system performance and data volume, as disabling grouping may significantly increase the number of processed items.\n- Consider the use case carefully: grouping is generally preferred for summary reports, while skipping grouping suits detailed audits or troubleshooting.\n\n**Examples**\n- skipGrouping: true — Processes each transaction separately, providing detailed, unaggregated data.\n- skipGrouping: false — Groups transactions by customer or date, producing summarized results.\n- skipGrouping omitted — Defaults to grouping enabled, aggregating related records.\n\n**Important notes**\n- Enabling skipGrouping can lead to increased processing time, higher memory usage, and larger output datasets.\n- Some reports, 
dashboards, or integrations may require grouped data; verify compatibility before enabling this option.\n- The default behavior is typically grouping enabled (skipGrouping = false) unless explicitly overridden.\n- Changes to this setting may affect data consistency and comparability with previously generated reports.\n\n**Dependency chain**\n- Often depends on other properties that define grouping keys or criteria (e.g., groupBy fields).\n- May interact with filtering, sorting, or pagination settings within the processing pipeline.\n- Could influence or be influenced by aggregation functions or summary calculations applied downstream.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false (grouping enabled).\n- Implemented as a conditional flag checked during the data aggregation phase.\n- When true, bypasses aggregation logic and processes each record individually.\n- Typically integrated into the processing workflow to toggle between grouped and ungrouped data handling."},"statsOnly":{"type":"boolean","description":"statsOnly indicates whether the API response should include only aggregated statistical summary data without any detailed individual records. This property is used to optimize response size and improve performance when detailed data is unnecessary.\n\n**Field behavior**\n- When set to true, the API returns only summary statistics such as counts, averages, sums, or other aggregate metrics.\n- When set to false or omitted, the response includes both detailed data records and the associated statistical summaries.\n- Helps reduce network bandwidth and processing time by excluding verbose record-level data.\n- Primarily intended for use cases like dashboards, reports, or monitoring tools where only high-level metrics are required.\n\n**Implementation guidance**\n- Default the value to false to ensure full data retrieval unless explicitly requesting summary-only data.\n- Validate that the input is a boolean to prevent unexpected API behavior.\n- Use this flag selectively in scenarios where detailed records are not needed to avoid loss of critical information.\n- Ensure the API endpoint supports this flag before usage, as some endpoints may not implement statsOnly functionality.\n- Adjust client-side logic to handle different response structures depending on the flag’s value.\n\n**Examples**\n- statsOnly: true — returns only aggregated statistics such as total counts, averages, or sums without any detailed entries.\n- statsOnly: false — returns full detailed data records along with statistical summaries.\n- statsOnly omitted — defaults to false, returning detailed data and statistics.\n\n**Important notes**\n- Enabling statsOnly disables access to individual record details, which may limit in-depth data analysis.\n- The response schema changes significantly when statsOnly is true; clients must handle these differences gracefully.\n- Some API endpoints may not support this property; verify compatibility in the API documentation.\n- Pagination parameters may be ignored or behave differently when statsOnly is enabled, since detailed records are excluded.\n\n**Dependency chain**\n- May interact with filtering, sorting, or date range parameters that influence the statistical data returned.\n- Can affect pagination logic because detailed records are omitted when statsOnly is true.\n- Dependent on the API endpoint’s support for summary-only responses.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or 
part of the request payload depending on API design.\n- Alters the response payload structure by excluding detailed record arrays and including only aggregated metrics.\n- Helps optimize API performance and reduce response payload size in scenarios where detailed data is unnecessary."},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export type is \"blob\". For blob exports, you must specify the internalId of the file to export from the NetSuite file cabinet.\n\n**Field behavior**\n- Identifies a specific file in the NetSuite file cabinet by its internal ID\n- Required for blob exports (raw binary file transfers without parsing)\n- The file at this internal ID will be exported as-is without parsing\n\n**Implementation guidance**\n- For blob exports: Set the export's top-level `type` to \"blob\" (do NOT set netsuite.type) and provide netsuite.internalId\n- Obtain the file's internalId from NetSuite's file cabinet or via API\n- Validate the internalId corresponds to an existing file before export\n\n**Examples**\n- \"12345\" - Internal ID of a specific file\n- \"67890\" - Another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId which specifies a folder for file exports with parsing\n- For blob exports: Use netsuite.internalId (file ID)\n- For file exports with parsing: Use netsuite.file.folderInternalId (folder ID)"},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"purgeFileAfterExport: Whether to delete the file from the system after it has been successfully exported. This property controls the automatic removal of the source file post-export to help manage storage and maintain system cleanliness.\n\n**Field behavior**\n- Determines if the exported file should be removed from the source storage immediately after a successful export operation.\n- When set to `true`, the system deletes the file as soon as the export completes without errors.\n- When set to `false` or omitted, the file remains intact in its original location after export.\n- Facilitates automated cleanup of files to prevent unnecessary storage consumption.\n- Does not affect the export process itself; deletion occurs only after confirming export success.\n\n**Implementation guidance**\n- Confirm that the export operation has fully completed and succeeded before initiating file deletion to avoid data loss.\n- Verify that the executing user or system has sufficient permissions to delete files from the source location.\n- Assess downstream workflows or processes that might require access to the file after export before enabling purging.\n- Implement logging or notification mechanisms to record when files are purged for audit trails and troubleshooting.\n- Consider integrating with retention policies or backup systems to prevent accidental loss of important data.\n\n**Examples**\n- `true` — The file will be deleted immediately after a successful export.\n- `false` — The file will remain in the source location after export.\n- Omitted or `null` — Defaults to `false`, meaning the file is retained post-export.\n\n**Important notes**\n- File deletion is permanent and cannot be undone; ensure that the file is no longer needed before enabling this option.\n- Use caution in multi-user or multi-process environments where files may be shared or required beyond the export operation.\n- Immediate purging may 
interfere with backup, archival, or compliance requirements if files are deleted too soon.\n- Consider implementing safeguards or confirmation steps if enabling automatic purging in production environments.\n\n**Dependency chain**\n- Relies on the successful completion of the export operation to trigger file deletion.\n- Dependent on file system permissions and access controls to allow deletion.\n- May be affected by other system settings related to file retention, archival, or cleanup policies.\n- Could interact with error handling mechanisms to prevent deletion if export fails or is incomplete.\n\n**Technical details**\n- Typically represented as a boolean value (`true` or `false`).\n- Default behavior is to retain files unless explicitly set to `true`.\n- Deletion should be performed using secure"}},"description":"blob: Configuration for retrieving raw binary files from NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: Raw binary transfer WITHOUT parsing - leave netsuite.type blank\n- File exports: Parse file contents into records - set netsuite.type to \"file\"\n\nDo NOT use blob configuration when you want file content parsed into data records.\n\n**Field behavior**\n- Stores raw binary data including files, images, audio, video, or any non-textual content.\n- Supports download operations for binary content from the NetSuite file cabinet.\n- File content is transferred as-is without any parsing or transformation.\n- May be immutable or mutable depending on the specific NetSuite entity and operation.\n- Requires careful handling to maintain data integrity during transmission and storage.\n\n**Implementation guidance**\n- Always encode binary data (e.g., using base64) when transmitting over text-based protocols such as JSON or XML to ensure data integrity.\n- Validate the size of the blob against NetSuite API limits and storage constraints to prevent errors or truncation.\n- Implement secure handling practices, including encryption in transit and at rest, to protect sensitive binary data.\n- Use appropriate MIME/content-type headers when uploading or downloading blobs to correctly identify the data format.\n- Consider chunked uploads/downloads or streaming for large blobs to optimize performance and resource usage.\n- Ensure consistent encoding and decoding mechanisms between client and server to avoid data corruption.\n\n**Examples**\n- A base64-encoded PDF document attached to a NetSuite customer record.\n- An image file (PNG or JPEG) stored as a blob for product catalog entries.\n- A binary export of transaction data in a proprietary format used for integration with external systems.\n- Audio or video files associated with marketing campaigns or training materials.\n- Encrypted binary blobs containing sensitive configuration or credential data.\n\n**Important notes**\n- Blob size may be limited by NetSuite API constraints or underlying storage capabilities; exceeding these limits can cause failures.\n- Encoding and decoding must be consistent and correctly implemented to prevent data corruption or loss.\n- Large 
blobs should be handled using chunked or streamed transfers to avoid memory issues and improve reliability.\n- Security is paramount; blobs may contain sensitive information requiring encryption and strict access controls.\n- Access to blob data typically requires proper authentication and authorization aligned with NetSuite’s security model.\n\n**Dependency chain**\n- Dependent on authentication and authorization mechanisms"},"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"recordType specifies the type of NetSuite record that the RESTlet will interact with. This property determines the schema, validation rules, and operations applicable to the record within the NetSuite environment, directly influencing how data is processed and managed by the RESTlet.\n\n**Field behavior**\n- Defines the specific NetSuite record type (e.g., customer, salesOrder, invoice) targeted by the RESTlet.\n- Influences the structure and format of data payloads sent to and received from the RESTlet.\n- Controls validation rules, mandatory fields, and available operations based on the selected record type.\n- Affects permissions and access controls enforced during RESTlet execution, ensuring compliance with NetSuite security settings.\n- Determines the applicable business logic and workflows triggered by the RESTlet for the specified record type.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the NetSuite record type as recognized by the NetSuite system to ensure accurate targeting.\n- Validate the recordType value against the list of supported NetSuite record types to prevent runtime errors and ensure compatibility.\n- Confirm that the RESTlet script has the necessary permissions and roles assigned to access and manipulate the specified record type.\n- When handling multiple record types dynamically, implement conditional logic to accommodate differences in data structure and processing requirements.\n- For custom record types, always use the script ID format (e.g., \"customrecord_myCustomRecord\") to avoid ambiguity.\n- Test RESTlet behavior thoroughly after changing the recordType to verify correct handling of data and operations.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"customrecord_myCustomRecord\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- The recordType must correspond to a valid and supported NetSuite record type; invalid values will cause API calls to fail.\n- Custom record types require referencing by their script IDs, which typically start with \"customrecord_\".\n- Modifying the recordType may necessitate updates to the RESTlet’s codebase to handle different data schemas and business logic.\n- Permissions and role restrictions in NetSuite can limit access to certain record types, impacting RESTlet functionality.\n- Consistency in recordType usage is critical for maintaining data integrity and predictable RESTlet behavior.\n\n**Dependency chain**\n- Depends on the NetSuite environment’s available record types and their configurations.\n- Influences the RESTlet’s data validation, processing logic, and response formatting.\n-"},"searchId":{"type":"string","description":"searchId: The unique identifier for a saved search in NetSuite, used to specify which saved search the RESTlet should execute. This ID corresponds to the internal ID assigned to saved searches within the NetSuite system. 
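// Minimal blob-export sketch recapping the numbered blob configuration steps above
// (JSONC comment annotation; values are illustrative, and the internalId must
// reference an existing file cabinet file):
// {
//   "type": "blob",
//   "netsuite": {
//     "internalId": "12345",
//     "blob": { "purgeFileAfterExport": false }
//   }
// }
// Note that netsuite.type is deliberately left unset for blob exports.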
It enables the RESTlet to run predefined queries and retrieve data based on the saved search’s criteria and configuration.\n\n**Field behavior**\n- Specifies the exact saved search to be executed by the RESTlet.\n- Must correspond to a valid and existing saved search internal ID within the NetSuite account.\n- Determines the dataset and filters applied when retrieving search results.\n- Typically required when invoking the RESTlet to perform search operations.\n- Influences the structure and content of the response based on the saved search definition.\n\n**Implementation guidance**\n- Verify that the searchId matches an existing saved search internal ID in the target NetSuite environment.\n- Validate the searchId format and existence before making the RESTlet call to prevent runtime errors.\n- Use the internal ID as a string or numeric value consistent with NetSuite’s conventions.\n- Implement error handling for scenarios where the searchId is invalid, missing, or inaccessible due to permission restrictions.\n- Ensure the integration role or user has appropriate permissions to access and execute the saved search.\n- Consider caching or documenting frequently used searchIds to improve maintainability.\n\n**Examples**\n- \"1234\" — a numeric internal ID representing a specific saved search.\n- \"5678\" — another valid saved search internal ID.\n- \"1001\" — an example of a saved search ID used to retrieve customer records.\n- \"2002\" — a saved search ID configured to return transaction data.\n\n**Important notes**\n- The searchId must be accessible by the user or integration role making the RESTlet call; otherwise, access will be denied.\n- Providing an incorrect or non-existent searchId will result in errors or empty search results.\n- Permissions and sharing settings on the saved search directly affect the data returned by the RESTlet.\n- The saved search must be properly configured with the desired filters, columns, and criteria to ensure meaningful results.\n- Changes to the saved search (e.g., modifying filters or columns) will impact the RESTlet output without changing the searchId.\n\n**Dependency chain**\n- Depends on the existence of a saved search configured in the NetSuite account.\n- Requires appropriate user or integration role permissions to access the saved search.\n- Relies on the saved search’s configuration (filters, columns, criteria) to"},"useSS2Restlets":{"type":"boolean","description":"useSS2Restlets: >\n  Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. 
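// Minimal saved-search sketch using the restlet fields described above (IDs are
// hypothetical; searchId must match an existing saved search internal ID):
// "restlet": {
//   "recordType": "salesOrder",
//   "searchId": "1234"
// }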
This setting controls the version of RESTlets invoked during API communication with NetSuite, impacting compatibility, performance, and available features.\n  **Field behavior**\n  - Determines the RESTlet version used for all API interactions within the NetSuite integration.\n  - When set to `true`, the system exclusively uses SuiteScript 2.0 RESTlets.\n  - When set to `false` or omitted, SuiteScript 1.0 RESTlets are used by default.\n  - Influences the structure, capabilities, and response formats of API calls.\n  **Implementation guidance**\n  - Enable this flag to take advantage of SuiteScript 2.0’s improved modularity, asynchronous capabilities, and modern JavaScript syntax.\n  - Verify that SuiteScript 2.0 RESTlets are properly deployed, configured, and accessible in the target NetSuite environment before enabling.\n  - Conduct comprehensive testing to ensure existing integrations and workflows remain functional when switching from SuiteScript 1.0 to 2.0 RESTlets.\n  - Coordinate with NetSuite administrators and developers to update or rewrite RESTlets if necessary.\n  **Examples**\n  - `true` — API calls will utilize SuiteScript 2.0 RESTlets, enabling modern scripting features.\n  - `false` — API calls will continue using legacy SuiteScript 1.0 RESTlets for backward compatibility.\n  **Important notes**\n  - SuiteScript 2.0 RESTlets support modular script architecture and ES6+ JavaScript features, improving maintainability and performance.\n  - Legacy RESTlets written in SuiteScript 1.0 may not be compatible with SuiteScript 2.0; migration or parallel support might be required.\n  - Switching RESTlet versions can change API response formats and behaviors, potentially impacting downstream systems.\n  - Ensure proper version control and rollback plans are in place when changing this setting.\n  **Dependency chain**\n  - Depends on the deployment and availability of SuiteScript 2.0 RESTlets within the NetSuite account.\n  - Requires that the API client and integration logic support the RESTlet version selected.\n  - May depend on other configuration settings related to authentication and script permissions.\n  **Technical details**\n  - SuiteScript 2.0 RESTlets use the AMD module format and support"},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the version type of the NetSuite Restlet being used. It determines the specific version or variant of the Restlet API that the integration will interact with, ensuring compatibility and correct functionality. 
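// Sketch of opting into SuiteScript 2.0 RESTlets, assuming 2.0 RESTlets are already
// deployed in the target account (other values as in the sketch above):
// "restlet": { "recordType": "salesOrder", "searchId": "1234", "useSS2Restlets": true }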
This property is essential for correctly routing requests, handling responses, and maintaining alignment with the expected API contract for the chosen Restlet version.\n\n**Field behavior**\n- Defines the version category or variant of the NetSuite Restlet API.\n- Influences request formatting, response parsing, and available features.\n- Determines which API endpoints and methods are accessible.\n- May impact authentication mechanisms and data serialization formats.\n- Ensures that the integration communicates with the correct Restlet version to prevent incompatibility issues.\n\n**Implementation guidance**\n- Use only predefined and officially supported Restlet version types provided by NetSuite.\n- Validate the type value against the current list of supported Restlet versions before deployment.\n- Update the type property when upgrading to a newer Restlet version or switching to a different variant.\n- Include the type property within the restletVersion object to explicitly specify the API version.\n- Coordinate changes to this property with client applications and integration workflows to maintain compatibility.\n- Monitor NetSuite release notes and documentation for any changes or deprecations related to Restlet versions.\n\n**Examples**\n- \"1.0\" — specifying the stable Restlet API version 1.0.\n- \"2.0\" — specifying the newer Restlet API version 2.0 with enhanced features.\n- \"beta\" — indicating a beta or experimental Restlet version for testing purposes.\n- \"custom\" — representing a custom or extended Restlet version tailored for specific use cases.\n\n**Important notes**\n- Providing an incorrect or unsupported type value can cause API calls to fail or behave unpredictably.\n- The type must be consistent with the NetSuite environment configuration and deployment settings.\n- Changing the type may necessitate updates in client-side code, authentication flows, and data handling logic.\n- Always consult the latest official NetSuite documentation to verify supported Restlet versions and their characteristics.\n- The type property is critical for maintaining long-term integration stability and compatibility.\n\n**Dependency chain**\n- Depends on the restlet object to define the overall Restlet configuration.\n- Influences other properties related to authentication, endpoint URLs, and data formats within the restletVersion context.\n- May affect downstream processing components that rely on version-specific behaviors.\n\n**Technical details**\n- Typically represented as a string value"},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API. 
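// Illustrative schema-style sketch of restletVersion with an enum constraint (the
// version strings are examples only; the supported set is account-dependent):
// "restletVersion": { "type": "string", "enum": ["1.0", "2.0", "2.1"] }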
This enumeration restricts the input to specific allowed versions to ensure compatibility, prevent invalid version assignments, and facilitate validation and user interface enhancements.\n\n**Field behavior**\n- Defines the complete set of valid version identifiers for the `restletVersion` property.\n- Ensures that only officially supported RESTlet API versions can be selected or submitted.\n- Enables validation mechanisms to reject unsupported or malformed version inputs.\n- Supports auto-completion and dropdown selections in user interfaces and API clients.\n- Helps maintain consistency and compatibility across different API integrations and deployments.\n\n**Implementation guidance**\n- Populate the enum with all currently supported NetSuite RESTlet API versions as defined by official documentation.\n- Regularly update the enum values to reflect newly released versions or deprecated ones.\n- Use clear and consistent string formats that match the official versioning scheme (e.g., semantic versions like \"1.0\", \"2.0\" or date-based versions like \"2023.1\").\n- Implement strict validation logic to reject any input not included in the enum.\n- Consider backward compatibility when adding or removing enum values to avoid breaking existing integrations.\n\n**Examples**\n- [\"1.0\", \"2.0\", \"2.1\"]\n- [\"2023.1\", \"2023.2\", \"2024.1\"]\n- [\"v1\", \"v2\", \"v3\"] (if applicable based on versioning scheme)\n\n**Important notes**\n- Enum values must strictly align with the official NetSuite RESTlet API versioning scheme to ensure correctness.\n- Using a version value outside this enum should trigger a validation error and prevent API calls or configuration saves.\n- The enum acts as a safeguard against runtime errors caused by unsupported or invalid version usage.\n- Changes to this enum should be communicated clearly to all API consumers to manage version compatibility.\n\n**Dependency chain**\n- This enum is directly associated with and constrains the `restletVersion` property.\n- Updates to supported RESTlet API versions necessitate corresponding updates to this enum.\n- Validation logic and UI components rely on this enum to enforce version correctness.\n- Downstream processes that depend on the `restletVersion` value are indirectly dependent on this enum’s accuracy.\n\n**Technical details**\n- Implemented as a string enumeration type within the API schema.\n- Used by validation middleware or schema"},"lowercase":{"type":"object","description":"lowercase: Specifies whether the restlet version string should be converted to lowercase characters to ensure consistent formatting.\n\n**Field behavior**\n- Determines if the restlet version identifier is transformed entirely to lowercase characters.\n- When set to true, the version string is converted to lowercase before any further processing or output.\n- When set to false or omitted, the version string retains its original casing as provided.\n- Affects only the textual representation of the version string, not its underlying value or meaning.\n\n**Implementation guidance**\n- Use this property to enforce uniform casing for version strings, particularly when interacting with case-sensitive systems or APIs.\n- Validate that the value assigned is a boolean (true or false).\n- Define a default behavior (commonly false) when the property is not explicitly set.\n- Apply the lowercase transformation early in the processing pipeline to maintain consistency.\n- Ensure that downstream components respect the transformed casing if this property is 
enabled.\n\n**Examples**\n- `true` — The version string \"V1.0\" is converted and output as \"v1.0\".\n- `false` — The version string \"V1.0\" remains unchanged as \"V1.0\".\n- Property omitted — The version string casing remains as originally provided.\n\n**Important notes**\n- Altering the casing of the version string may impact integrations with external systems that are case-sensitive.\n- Confirm compatibility with all consumers of the version string before enabling this property.\n- This property does not modify the semantic meaning or version number, only its textual case.\n- Should be used consistently across all instances where the version string is handled to avoid discrepancies.\n\n**Dependency chain**\n- Requires a valid restlet version string to perform the lowercase transformation.\n- May be used in conjunction with other formatting or validation properties related to the restlet version.\n- The effect of this property should be considered when performing version comparisons or logging.\n\n**Technical details**\n- Implemented as a boolean flag controlling whether to apply a lowercase function to the version string.\n- The transformation typically involves invoking a standard string lowercase method/function.\n- Should be executed before any version string comparisons, storage, or output operations.\n- Does not affect the internal representation of the version beyond its string casing."}},"description":"restletVersion specifies the version of the NetSuite Restlet script to be used for the API call. This property determines which version of the Restlet script is invoked, ensuring compatibility and proper execution of the request. It allows precise control over which iteration of the Restlet logic is executed, facilitating version management and smooth transitions between script updates.\n\n**Field behavior**\n- Defines the specific version of the Restlet script to target for the API request.\n- Influences the behavior, output, and compatibility of the API response based on the selected script version.\n- Enables management of multiple Restlet script versions within the same NetSuite environment.\n- Ensures that the API call executes the intended logic corresponding to the specified version.\n- Helps prevent conflicts or errors arising from script changes or updates.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of the deployed Restlet script in NetSuite.\n- Confirm that the specified version is properly deployed and active in the NetSuite account before use.\n- Adopt a clear and consistent versioning scheme (e.g., semantic versioning, date-based, or custom tags) to avoid ambiguity.\n- Update this property whenever switching to a newer or different Restlet script version to reflect the intended logic.\n- Validate the version string format to prevent malformed or unsupported values.\n- Coordinate version updates with deployment and testing processes to ensure smooth transitions.\n\n**Examples**\n- \"1.0\"\n- \"2.1\"\n- \"2023.1\"\n- \"v3\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version will cause the API call to fail or produce unexpected results.\n- Proper versioning supports backward compatibility and controlled feature rollouts.\n- Always verify the Restlet script version in the NetSuite environment before making API calls.\n- Version mismatches can lead to errors, data inconsistencies, or unsupported operations.\n- This property is critical for environments where multiple Restlet 
versions coexist.\n\n**Dependency chain**\n- Depends on the Restlet scripts deployed and versioned within the NetSuite environment.\n- Related to the authentication and authorization context of the API call, as permissions may vary by script version.\n- Works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet.\n- May interact with environment-specific configurations or feature flags tied to particular versions.\n\n**Technical details**\n- Typically represented as a string"},"criteria":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the category or classification of the criteria used within the NetSuite RESTlet API. It defines the nature or kind of the criteria being applied to filter or query data, enabling precise targeting of records based on their domain or entity type.\n\n**Field behavior**\n- Determines the specific category or classification of the criteria.\n- Influences how the criteria is interpreted and processed by the API.\n- Helps in filtering or querying data based on the defined type.\n- Typically expects a predefined set of values corresponding to valid criteria types.\n- Acts as a key discriminator that guides the API in applying the correct schema and validation rules for the criteria.\n- May affect the available fields and operators applicable within the criteria.\n\n**Implementation guidance**\n- Validate the input against the allowed set of criteria types to ensure correctness and prevent errors.\n- Use consistent, descriptive, and case-sensitive naming conventions for the type values as defined by NetSuite.\n- Ensure that the specified type aligns with the corresponding criteria structure and expected data fields.\n- Document all possible values and their meanings clearly for API consumers to facilitate correct usage.\n- Implement error handling to provide meaningful feedback when unsupported or invalid types are supplied.\n- Keep the list of valid types updated in accordance with changes in NetSuite API versions and account configurations.\n\n**Examples**\n- \"customer\" — to specify criteria related to customer records.\n- \"transaction\" — to filter based on transaction data such as sales orders or invoices.\n- \"item\" — to apply criteria on inventory or product items.\n- \"employee\" — to target employee-related data.\n- \"vendor\" — to filter vendor or supplier records.\n- \"customrecord_xyz\" — to specify criteria for a custom record type identified by its script ID.\n\n**Important notes**\n- The type value directly affects the behavior of the criteria and the resulting data set.\n- Incorrect or unsupported type values may lead to API errors, empty results, or unexpected behavior.\n- The set of valid types may vary depending on the NetSuite account configuration, customizations, and API version.\n- Always refer to the latest official NetSuite API documentation and your account’s schema for supported types.\n- Some types may require additional permissions or roles to access the corresponding data.\n- The type property is mandatory for criteria filtering to function correctly.\n\n**Dependency chain**\n- Depends on the overall criteria object structure within NetSuite.restlet.criteria.\n- Influences the available fields, operators, and values within the criteria definition"},"join":{"type":"string","description":"join: Specifies the criteria used to join related records or tables in a NetSuite RESTlet query, enabling the 
retrieval of data based on relationships between different record types.\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval.\n- Determines how records from different tables are combined based on matching fields.\n- Supports nested joins to allow complex queries involving multiple related records.\n**Implementation guidance**\n- Use valid join names as defined in NetSuite’s schema or documentation for the specific record types.\n- Ensure the join criteria align with the intended relationship to avoid incorrect or empty query results.\n- Combine with appropriate filters on the joined records to refine query results.\n- Validate join paths to prevent errors during query execution.\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using \"item\" join to filter transactions based on item attributes.\n- Nested join example: joining from a sales order to its customer and then to the customer’s address.\n**Important notes**\n- Incorrect join names or paths can cause query failures or unexpected results.\n- Joins may impact query performance; use them judiciously.\n- Not all record types support all possible joins; consult NetSuite documentation.\n- Joins are case-sensitive and must match NetSuite’s API specifications exactly.\n**Dependency chain**\n- Depends on the base record type specified in the query.\n- Works in conjunction with filter criteria to refine results.\n- May depend on authentication and permissions to access related records.\n**Technical details**\n- Typically represented as a string or object indicating the join path.\n- Used within the criteria object of a RESTlet query payload.\n- Supports multiple levels of nesting for complex joins.\n- Must conform to NetSuite’s SuiteScript or RESTlet API join syntax and conventions."},"operator":{"type":"string","description":"operator: >\n  Specifies the comparison operator used to evaluate the criteria in a NetSuite RESTlet request.\n  This operator determines how the field value is compared against the specified criteria value(s) to filter or query records.\n  **Field behavior**\n  - Defines the type of comparison between a field and a value (e.g., equality, inequality, greater than).\n  - Influences the logic of the criteria evaluation in RESTlet queries.\n  - Supports various operators such as equals, not equals, greater than, less than, contains, etc.\n  **Implementation guidance**\n  - Use valid NetSuite-supported operators to ensure correct query behavior.\n  - Match the operator type with the data type of the field being compared (e.g., use numeric operators for numeric fields).\n  - Combine multiple criteria with appropriate logical operators if needed.\n  - Validate operator values to prevent errors in RESTlet execution.\n  **Examples**\n  - \"operator\": \"is\" (checks if the field value is equal to the specified value)\n  - \"operator\": \"isnot\" (checks if the field value is not equal to the specified value)\n  - \"operator\": \"greaterthan\" (checks if the field value is greater than the specified value)\n  - \"operator\": \"contains\" (checks if the field value contains the specified substring)\n  **Important notes**\n  - The operator must be compatible with the field type and the value provided.\n  - Incorrect operator usage can lead to unexpected query results or errors.\n  - Operators are case-sensitive and should match NetSuite's expected operator strings.\n  **Dependency 
chain**\n  - Depends on the field specified in the criteria to determine valid operators.\n  - Works in conjunction with the criteria value(s) to form a complete condition.\n  - May be combined with logical operators when multiple criteria are used.\n  **Technical details**\n  - Typically represented as a string value in the RESTlet criteria JSON object.\n  - Supported operators align with NetSuite's SuiteScript search operators.\n  - Must conform to the list of operators recognized by the NetSuite RESTlet API."},"searchValue":{"type":"object","description":"searchValue: The value used as the search criterion to filter results in the NetSuite RESTlet API. This value is matched against the specified search field to retrieve relevant records based on the search parameters provided.\n**Field behavior**\n- Acts as the primary input for filtering search results.\n- Supports various data types depending on the search field (e.g., string, number, date).\n- Used in conjunction with other search criteria to refine query results.\n- Can be a partial or full match depending on the search configuration.\n**Implementation guidance**\n- Ensure the value type matches the expected type of the search field.\n- Validate the input to prevent injection attacks or malformed queries.\n- Use appropriate encoding if the value contains special characters.\n- Combine with logical operators or additional criteria for complex searches.\n**Examples**\n- \"Acme Corporation\" for searching customer names.\n- 1001 for searching by internal record ID.\n- \"2024-01-01\" for searching records created on or after a specific date.\n- \"Pending\" for filtering records by status.\n**Important notes**\n- The effectiveness of the search depends on the accuracy and format of the searchValue.\n- Case sensitivity may vary based on the underlying NetSuite configuration.\n- Large or complex search values may impact performance.\n- Null or empty values may result in no filtering or return all records.\n**Dependency chain**\n- Depends on the searchField property to determine which field the searchValue applies to.\n- Works alongside searchOperator to define how the searchValue is compared.\n- Influences the results returned by the RESTlet endpoint.\n**Technical details**\n- Typically passed as a string in the API request payload.\n- May require serialization or formatting based on the API specification.\n- Integrated into the NetSuite search query logic on the server side.\n- Subject to NetSuite’s search limitations and indexing capabilities."},"searchValue2":{"type":"object","description":"searchValue2 is an optional property used to specify the second value in a search criterion within the NetSuite RESTlet API. 
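// Illustrative range criterion: two-value operators such as "between" pair
// searchValue with searchValue2 (the dates are hypothetical):
// { "operator": "between", "searchValue": "2023-01-01", "searchValue2": "2023-12-31" }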
It is typically used in conjunction with search operators that require two values, such as \"between\" or \"not between,\" to define a range or a pair of comparison values.\n\n**Field behavior**\n- Represents the second operand or value in a search condition.\n- Used primarily with operators that require two values (e.g., \"between\", \"not between\").\n- Optional field; may be omitted if the operator only requires a single value.\n- Works alongside searchValue (the first value) to form a complete search criterion.\n\n**Implementation guidance**\n- Ensure that searchValue2 is provided only when the selected operator requires two values.\n- Validate the data type of searchValue2 to match the expected type for the field being searched (e.g., date, number, string).\n- When using range-based operators, searchValue2 should represent the upper bound or second boundary of the range.\n- If the operator does not require a second value, omit this property to avoid errors.\n\n**Examples**\n- For a date range search: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- For a numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- For a \"not between\" operator: searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 without a compatible operator may result in an invalid search query.\n- The data type and format of searchValue2 must be consistent with searchValue and the field being queried.\n- This property is ignored if the operator only requires a single value.\n- Proper validation and error handling should be implemented when processing this field.\n\n**Dependency chain**\n- Dependent on the \"operator\" property within the same search criterion.\n- Works in conjunction with \"searchValue\" to define the search condition.\n- Part of the \"criteria\" array or object in the NetSuite RESTlet search request.\n\n**Technical details**\n- Data type varies depending on the field being searched (string, number, date, etc.).\n- Typically serialized as a JSON property in the RESTlet request payload.\n- Must conform to the expected format for the field and operator to avoid API errors.\n- Used internally by NetSuite to construct the appropriate"},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to define criteria for filtering or querying data within the NetSuite RESTlet API.\n  This formula allows users to specify complex conditions using NetSuite's formula syntax, enabling advanced and flexible data retrieval.\n  **Field behavior**\n  - Accepts a formula expression as a string that defines custom filtering logic.\n  - Used to create dynamic and complex criteria beyond standard field-value comparisons.\n  - Evaluated by the NetSuite backend to filter records according to the specified logic.\n  - Can incorporate NetSuite formula functions, operators, and field references.\n  **Implementation guidance**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string before submission to avoid runtime errors.\n  - Use this field when standard criteria fields are insufficient for the required filtering.\n  - Combine with other criteria fields as needed to build comprehensive queries.\n  **Examples**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n  - \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n  - \"NVL({amount}, 0) > 1000\"\n  **Important notes**\n  - Incorrect or invalid formulas 
may cause the API request to fail or return errors.\n  - The formula must be compatible with the context of the query and the fields available.\n  - Performance may be impacted if complex formulas are used extensively.\n  - Formula evaluation is subject to NetSuite's formula engine capabilities and limitations.\n  **Dependency chain**\n  - Depends on the availability of fields referenced within the formula.\n  - Works in conjunction with other criteria properties in the request.\n  - Requires understanding of NetSuite's formula syntax and functions.\n  **Technical details**\n  - Data type: string.\n  - Supports NetSuite formula syntax including SQL-like expressions and functions.\n  - Evaluated server-side during the processing of the RESTlet request.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"_id":{"type":"object","description":"Unique identifier for the record within the NetSuite system.\n**Field behavior**\n- Serves as the primary key to uniquely identify a specific record.\n- Immutable once the record is created.\n- Used to retrieve, update, or delete the corresponding record.\n**Implementation guidance**\n- Must be a valid NetSuite internal ID format, typically a string or numeric value.\n- Should be provided when querying or manipulating a specific record.\n- Avoid altering this value to maintain data integrity.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n**Important notes**\n- This ID is assigned by NetSuite and should not be generated manually.\n- Ensure the ID corresponds to an existing record to avoid errors.\n- When used in criteria, it filters the dataset to the exact record matching this ID.\n**Dependency chain**\n- Dependent on the existence of the record in NetSuite.\n- Often used in conjunction with other criteria fields for precise querying.\n**Technical details**\n- Typically represented as a string or integer data type.\n- Used in RESTlet scripts as part of the criteria object to specify the target record.\n- May be included in URL parameters or request bodies depending on the API design."}},"description":"criteria: >\n  Defines the set of conditions or filters used to specify which records should be retrieved or affected by the NetSuite RESTlet operation. 
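// Illustrative formula criterion reusing one of the formula examples above (the
// threshold is hypothetical):
// { "formula": "NVL({amount}, 0) > 1000" }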
This property enables clients to precisely narrow down the dataset by applying one or more criteria based on record fields, comparison operators, and values, supporting complex logical combinations to tailor the query results.\n\n  **Field behavior**\n  - Accepts a structured object or an array representing one or multiple filtering conditions.\n  - Supports logical operators such as AND, OR, and nested groupings to combine multiple criteria flexibly.\n  - Each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n  - Enables filtering on various data types including strings, numbers, dates, and booleans.\n  - Used to limit the scope of data returned or manipulated by the RESTlet to only those records that meet the specified conditions.\n  - When omitted or empty, the RESTlet may return all records or apply default filtering behavior as defined by the implementation.\n\n  **Implementation guidance**\n  - Validate the criteria structure rigorously to ensure it conforms to the expected schema before processing.\n  - Support nested criteria groups to allow complex and hierarchical filtering logic.\n  - Map criteria fields and operators accurately to corresponding NetSuite record fields and search operators, considering data types and operator compatibility.\n  - Handle empty or undefined criteria gracefully by returning all records or applying sensible default filters.\n  - Sanitize all input values to prevent injection attacks, malformed queries, or unexpected behavior.\n  - Provide clear error messages when criteria are invalid or unsupported.\n  - Optimize query performance by translating criteria into efficient NetSuite search queries.\n\n  **Examples**\n  - A single criterion filtering records where status equals \"Open\":\n    `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n  - Multiple criteria combined with AND logic:\n    `[{\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n  - Nested criteria combining OR and AND:\n    `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\":"},"columns":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the data type of the column in the NetSuite RESTlet response. 
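// The nested criteria example above is truncated; a completed version might read as
// follows (the closing value 2 mirrors the earlier AND example and is an assumption):
// { "operator": "OR", "criteria": [
//     { "field": "status", "operator": "equals", "value": "Open" },
//     { "operator": "AND", "criteria": [
//       { "field": "priority", "operator": "greaterThan", "value": 2 } ] } ] }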
It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- \"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences"},"join":{"type":"string","description":"join: Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
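// Minimal column sketch, reading type as the column's data-type name per the
// description above (the value is illustrative):
// { "type": "date" }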
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to 
function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive but recommended to use uppercase for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"array","description":"Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"boolean","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- 
May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"Configuration object that controls how the values in this column are summarized (aggregated) when the search returns summary results, mirroring NetSuite saved-search summary behavior.\n\n**Field behavior**\n- The `type` sub-property selects the aggregation applied to the column (e.g., SUM, COUNT, AVG, MIN, MAX).\n- The `enum` and `lowercase` sub-properties control how values are constrained and normalized before summarization.\n- Only meaningful when the search is configured to return summarized rather than detail rows.\n\n**Implementation guidance**\n- Choose a summary type appropriate for the column's data type (e.g., SUM or AVG for numeric fields, COUNT for any field).\n- Omit this object entirely for columns that should be returned as raw, unsummarized values.\n\n**Dependency chain**\n- Depends on the column's data type and on the overall search configuration.\n- Influences the shape and meaning of the rows returned by the RESTlet."},"formula":{"type":"string","description":"A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. 
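For instance, a hedged sketch of a formula column (the field IDs and label are illustrative):\n```json\n{ \"label\": \"Amount Plus Tax\", \"formula\": \"{amount} + NVL({taxtotal}, 0)\" }\n```\n\n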
This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded when transmitted as part of a URL query string."},"label":{"type":"string","description":"The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization resources if internationalization is supported.\n  **Technical 
details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"Specifies whether the query results should be sorted by the values in this column.\n  **Field behavior**\n  - When set to true, the data returned by the API is ordered by this column's values.\n  - When set to false or omitted, this column does not participate in sorting.\n  - Influences how the dataset is organized before being returned by the API.\n  **Implementation guidance**\n  - Ensure the sort flag is applied to a valid column in the dataset.\n  - When multiple columns enable sorting, their order of definition determines the primary and secondary sort criteria.\n  - Validate the sort value to prevent errors or unexpected behavior.\n  **Examples**\n  - true: sorts the query results by this column's values.\n  - false: excludes this column from the sort order.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no column enables sorting, the default sorting behavior of the API applies.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Implemented as a boolean flag on the column definition.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. 
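As an illustrative sketch (all field names are placeholders), a columns definition mixing the string and object forms might look like:\n```json\n[\"internalid\", \"tranid\", { \"name\": \"entityid\", \"join\": \"customer\" }, { \"label\": \"Net\", \"formula\": \"{amount} - {taxtotal}\" }]\n```\n\n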
By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - `[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically causes all available fields for the record type to be returned, which can increase payload size and degrade performance."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. 
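For example (an illustrative sketch only), a whole-number batch size would be declared as:\n```json\n{ \"type\": \"integer\" }\n```\n\n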
It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
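A hedged sketch of a locked configuration (values are illustrative):\n```json\n{ \"markExportedBatchSize\": { \"cLocked\": true } }\n```\n\n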
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"integer","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. 
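For example (illustrative values), this lower bound is typically paired with the corresponding upper bound to define a valid range:\n```json\n{ \"min\": 10, \"max\": 500 }\n```\n\n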
It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer\n- Must be a positive integer greater than zero\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size"},"max":{"type":"integer","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for 
environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing API latency can improve the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply 
to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters"},"TODO":{"type":"object","description":"TODO: A placeholder property used to indicate tasks, features, or sections within the NetSuite RESTlet integration that require implementation, completion, or further development. This property functions as a clear marker for developers and project managers to identify areas that are pending work, ensuring that these tasks are tracked and addressed before finalizing the API. It is not intended to hold any functional data or be part of the production API contract until fully implemented.\n\n**Field behavior**\n- Serves as a temporary indicator for incomplete, pending, or planned tasks within the API schema.\n- Does not contain operational data or affect API functionality until properly defined and implemented.\n- Helps track development progress and highlight areas needing attention during the development lifecycle.\n- Should be removed or replaced with finalized implementations once the associated task is completed.\n- May be used to generate reports or dashboards reflecting outstanding development work.\n\n**Implementation guidance**\n- Utilize the TODO property to explicitly flag API sections requiring further coding, configuration, or review.\n- Accompany TODO entries with detailed comments or references to issue tracking systems (e.g., JIRA, GitHub Issues) for clarity and traceability.\n- Establish regular review cycles to update, resolve, or remove TODO properties to maintain an accurate representation of development status.\n- Avoid deploying TODO properties in production environments to prevent confusion, incomplete features, or potential runtime errors.\n- Integrate TODO tracking with project management workflows to ensure timely resolution.\n\n**Examples**\n- TODO: Implement OAuth 2.0 authentication mechanism for RESTlet endpoints.\n- TODO: Add comprehensive validation rules for input parameters to ensure data integrity.\n- TODO: Complete error handling and logging for data retrieval failures.\n- TODO: Optimize response payload size for improved performance.\n- TODO: Integrate unit tests covering all new RESTlet functionalities.\n\n**Important notes**\n- The presence of TODO properties signifies incomplete or provisional functionality and should not be interpreted as finalized API features.\n- Unresolved TODO items can lead to partial implementations, unexpected behavior, or runtime errors if not addressed before release.\n- Effective management and timely resolution of TODO properties are critical for maintaining code quality, project timelines, and overall system stability.\n- TODO properties should be clearly documented and communicated within the development team to avoid oversight.\n\n**Dependency chain**\n- TODO properties may depend on other modules, services, or API components that are under development or pending integration.\n- Often linked to external project management or issue tracking tools for assignment, prioritization, and progress monitoring.\n- 
These markers must be resolved before the associated functionality can be considered complete."},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during the execution of the NetSuite RESTlet hook. This parameter helps control the workload size for each batch operation, optimizing performance and resource utilization by balancing processing efficiency and system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle.\n- Directly influences the frequency and duration of batch processing operations.\n- Helps manage memory consumption and processing time by limiting the batch workload.\n- Affects overall throughput and latency of batch operations, impacting system responsiveness.\n- Controls how data is segmented and processed in discrete units during RESTlet execution.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts.\n- Larger batch sizes can improve throughput by reducing the number of batch cycles but may increase individual batch processing time and risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero to ensure proper operation.\n- Take into account NetSuite API governance limits, such as usage units and execution time, when determining batchSize.\n- Monitor system performance and adjust batchSize dynamically if possible to optimize processing efficiency.\n- Ensure batchSize aligns with other batch-related configurations to maintain consistency and predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high batchSize values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion.\n- Very low batchSize values can result in inefficient processing due to increased overhead and more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- batchSize should be consistent with other batch processing parameters to avoid conflicts or unexpected behavior.\n- Changes to batchSize may require adjustments in error handling and retry logic to accommodate different batch sizes.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by related settings such as NetSuite governance limits and hook execution time."},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
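As an illustrative sketch (the ID and function name are placeholders), a preSend hook referencing a file might look like:\n```json\n{ \"preSend\": { \"fileInternalId\": \"12345\", \"function\": \"preSendHandler\" } }\n```\n\n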
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during the preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n- Serves as a key reference for file-related operations in automated workflows and integrations.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations.\n- Use this ID to fetch file metadata, content, or perform updates within the preSend hook.\n- Implement error handling to manage cases where the fileInternalId does not correspond to a valid or accessible file.\n- Ensure that the executing user or integration has the necessary permissions to access the file referenced by this ID.\n- Avoid hardcoding this ID; retrieve dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across different accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID.\n- This ID is essential for linking files reliably in automated processes, integrations, and RESTlet hooks.\n- Deleting and recreating a file will result in a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet.\n- Requires appropriate permissions to access or manipulate the file.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation.\n- Stored internally within NetSuite’s database as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Immutable identifier that does not change unless the file is deleted and recreated.\n- Integral to reliable file handling in preSend hook workflows and integrations."},"function":{"type":"string","description":"Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. 
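For example (the function name and configuration keys are illustrative, echoing the examples below):\n```json\n{ \"function\": \"sanitizePayload\", \"configuration\": { \"enableLogging\": true } }\n```\n\n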
It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow.\n  - Optimize the function for performance to minimize latency in request processing.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data.\n  - Ensure that the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - If asynchronous processing is used, verify that the integration framework supports it to avoid unexpected behavior.\n  - Testing the function thoroughly is critical to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named function within the integration’s runtime environment."},"configuration":{"type":"object","description":"An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests.\n  - Can include settings such as authentication parameters (e.g., tokens, API keys), request modification flags (e.g., header adjustments), logging options (e.g., enable/disable logging), timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during the execution of the preSend hook to adapt request handling based on current context or conditions.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object to avoid ambiguity and ensure maintainability.\n  - Validate all configuration values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all possible configuration options, their expected data types, default values, and their specific effects on the preSend hook’s operation.\n  - Ensure sensitive information within the configuration (e.g., authentication tokens) is handled securely, following best practices for encryption and access control.\n  - Consider versioning the configuration schema if multiple versions of the preSend hook or integration exist.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" }`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`"}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. 
This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client.\n- Receives the response data as input and can modify it.\n- Can be used to log, audit, or transform the response payload.\n- Should return the final response object to be sent.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n- Any changes made here directly impact the final output received by the client.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment and use case.\n- Ensure any modifications maintain the expected response format and data integrity.\n- Avoid long-running or blocking operations to prevent delaying the response delivery.\n- Handle errors gracefully within the hook to prevent disrupting the overall RESTlet response flow.\n- Validate the modified response to ensure it complies with API schema and client expectations.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for auditing or debugging purposes.\n- Transforming response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent.\n- Modifications here directly affect what the client ultimately receives.\n- Errors thrown in this hook may cause the RESTlet to fail or return an error response.\n- Use this hook to enforce response-level policies, compliance, or data governance rules.\n- Avoid introducing side effects that could alter the idempotency or consistency of the response.\n- Testing this hook thoroughly is critical to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared.\n- Precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or transformations that prepared the response object."}},"description":"An array of hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
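As a minimal illustrative sketch (values are placeholders, not a definitive contract):\n```json\n{ \"hooks\": { \"batchSize\": 100, \"preSend\": { \"fileInternalId\": \"12345\", \"function\": \"preSendHandler\" } } }\n```\n\n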
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The 
record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"The identifier or URL of the NetSuite Restlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which Restlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the specific target Restlet script or endpoint for API calls within the NetSuite environment.\n- Routes requests to custom server-side scripts developed using NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations.\n- Supports various HTTP methods such as GET, POST, PUT, and DELETE depending on the Restlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n- Acts as the primary entry point for invoking custom logic that extends or complements standard NetSuite functionality.\n\n**Implementation guidance**\n- Confirm that the Restlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs.\n- Use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or full URL for external integrations or REST clients.\n- Verify that the Restlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, etc.).\n- Secure the Restlet endpoint by implementing authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic to manage scenarios where the Restlet is unavailable or returns errors.\n- Test the Restlet thoroughly in a non-production environment before deploying to production to ensure expected behavior and security compliance.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a Restlet script ID for custom sales order processing)"},"distributed":{"properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", 
\"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"Indicates whether the distributed feature in NetSuite is disabled. 
This boolean flag controls the availability and operational status of the distributed functionalities within the NetSuite integration, allowing administrators or systems to enable or disable these features as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is fully disabled, preventing any distributed operations or workflows from executing.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n- Acts as a toggle switch to control the accessibility of distributed capabilities within the NetSuite environment.\n- Changes to this flag directly influence the behavior of distributed-related processes and integrations.\n\n**Implementation guidance**\n- Use a boolean value: `true` to disable the distributed feature, `false` to enable it.\n- Before disabling, verify that no critical processes depend on distributed functionality to avoid disruptions.\n- Implement validation checks to confirm the current state before initiating distributed operations.\n- Provide clear user notifications or system logs when the feature is disabled to aid in troubleshooting and auditing.\n- Consider the impact on dependent modules and ensure coordinated updates if disabling this feature.\n\n**Examples**\n- `disabled: true`  # The distributed feature is turned off, disabling all related operations.\n- `disabled: false` # The distributed feature is active and available for use.\n- Omitted `disabled` property defaults to `false`, enabling the feature by default.\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, potentially causing failures or delays.\n- Some systems may require a restart or reinitialization after changing this setting for the change to take full effect.\n- Modifying this property should be restricted to users with appropriate permissions to prevent unauthorized disruptions.\n- Always assess the broader impact on the NetSuite integration before toggling this flag.\n\n**Dependency chain**\n- Directly affects modules and properties that rely on distributed functionality within the NetSuite integration.\n- Should be checked and respected by any API calls, workflows, or processes that involve distributed features.\n- May influence error handling and fallback mechanisms in distributed-related operations.\n\n**Technical details**\n- Data type: Boolean\n- Default value: `false` (distributed feature enabled)\n- Located under the `netsuite.distributed` namespace in the API schema\n- Changing this property triggers state changes in distributed feature availability within the system"},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. 
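As an illustrative sketch of how this setting combines with the other distributed options (all values are placeholders):\n```json\n{ \"recordType\": \"salesorder\", \"executionContext\": [\"userinterface\"], \"executionType\": [\"create\", \"edit\"], \"disabled\": false }\n```\n\n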
When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", \"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"object","description":"qualifier: A string value used to specify a particular qualifier or modifier that further defines, categorizes, or scopes the associated data within the NetSuite distributed context. This property enables more granular identification, filtering, and processing of data by applying specific criteria or attributes relevant to business logic, integration workflows, or operational requirements. It serves as an optional but powerful tool to distinguish data subsets, enhance data semantics, and support conditional handling in distributed NetSuite environments.\n\n**Field behavior**\n- Acts as an additional identifier or modifier to refine the meaning, scope, or classification of the associated data.\n- Enables filtering, categorization, or qualification of data entries in distributed NetSuite operations based on specific business rules.\n- Typically optional but may be mandatory in certain contexts or API endpoints where precise data segmentation is required.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n- Supports multiple use cases including regional segmentation, priority tagging, type classification, and channel identification.\n\n**Implementation guidance**\n- Ensure the qualifier value strictly aligns with the accepted set of qualifiers defined in the business domain, integration specifications, or system configuration.\n- Implement validation mechanisms to verify that the qualifier string matches allowed or expected values to avoid errors, misclassification, or unintended behavior.\n- Adopt consistent naming conventions and formatting standards (e.g., lowercase, hyphen-separated) for qualifiers to maintain clarity, readability, and interoperability across systems.\n- Maintain comprehensive documentation of all custom and standard qualifiers used, including their intended meaning and usage scenarios, to facilitate maintenance, troubleshooting, and future integrations.\n- Consider the impact of qualifiers on downstream processing, reporting, and analytics to ensure they are leveraged effectively and do not introduce ambiguity.\n\n**Examples**\n- \"region-us\" to specify data related to the United 
States region.\n- \"priority-high\" to indicate transactions or records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n- \"segment-enterprise\" to classify data pertaining to enterprise-level customers.\n- \"status-active\" to filter or identify active records within a dataset.\n\n**Important notes**\n- The qualifier should be meaningful, contextually relevant, and aligned with the business logic to ensure accurate data interpretation.\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering"},"skipExportFieldId":{"type":"string","description":"skipExportFieldId is an identifier for a specific field within the NetSuite distributed configuration that determines whether certain data should be excluded from export processes. It serves as a control mechanism to selectively omit data associated with particular fields during export operations, enabling tailored and efficient data handling.\n\n**Field behavior:**  \n- Acts as a flag or marker to skip exporting data associated with the specified field ID.  \n- When set, the export routines will omit the data linked to this field from being included in the export payload.  \n- Helps control and customize the export behavior on a per-field basis within distributed NetSuite configurations.  \n- Does not affect data visibility or storage within NetSuite; it only influences export output.  \n- Supports multiple uses in scenarios where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance:**  \n- Ensure the field ID provided corresponds to a valid and existing field within the NetSuite schema to prevent export errors.  \n- Use this property to optimize export operations by excluding unnecessary or sensitive data fields, improving performance and compliance.  \n- Validate the field ID format and existence before applying it to avoid runtime issues during export.  \n- Integrate with export logic to check this property before including fields in the export output, ensuring consistent behavior.  \n- Consider maintaining a centralized list or configuration of skipExportFieldIds for easier management and auditing.  \n\n**Examples:**  \n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)  \n- skipExportFieldId: \"item_custom_field_123\" (excludes a specific item custom field from export)  \n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)  \n\n**Important notes:**  \n- This property only affects export operations and does not alter data storage or visibility within NetSuite.  \n- Misconfiguration may lead to incomplete data exports if critical fields are skipped unintentionally, potentially impacting downstream processes.  \n- Should be used judiciously to maintain data integrity and compliance with business rules and regulatory requirements.  \n- Changes to this property should be documented and reviewed to avoid unintended data omissions.  \n\n**Dependency chain**\n- Depends on the existence and validity of the specified field ID within the NetSuite schema.  \n- Relies on export routines to check and respect this property during data export processes.  
\n- May interact with other export configuration settings that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"The unique internal identifier assigned to a file within the NetSuite system. This identifier is essential for accurately referencing and manipulating a specific file during various operations such as retrieval, update, or deletion within NetSuite's environment. It acts as a primary key that ensures precise targeting of files in automated workflows, scripts, and API calls.\n\n**Field behavior**\n- Serves as a unique and immutable key to identify a file in the NetSuite file cabinet.\n- Utilized in pre-send hooks and other automation points to specify the exact file being processed or referenced.\n- Must correspond to an existing file’s internal ID within the NetSuite account to ensure valid operations.\n- Enables consistent and reliable file operations by linking actions directly to the file’s system-assigned identifier.\n\n**Implementation guidance**\n- Always ensure the value corresponds to an existing file’s internal ID in NetSuite; the ID is numeric, but it is carried as a string in this schema.\n- Validate the internal ID before performing any file operations to avoid runtime errors or failed transactions.\n- Use this ID when invoking NetSuite APIs, SuiteScript, or other integration points to fetch, update, or delete files.\n- Avoid hardcoding the internal ID; instead, dynamically retrieve it through queries or API calls to maintain adaptability and reduce maintenance overhead.\n- Handle exceptions gracefully when the ID does not correspond to any file, providing meaningful error messages or fallback logic.\n\n**Examples**\n- \"12345\"\n- \"987654\"\n- \"1001\"\n\n**Important notes**\n- The internal ID is system-generated by NetSuite and guaranteed to be unique within the account.\n- This ID is distinct from file names, external URLs, or folder identifiers and should not be confused with them.\n- Using an incorrect or non-existent internal ID will cause operations to fail, potentially interrupting workflows.\n- The internal ID remains constant for the lifetime of the file and does not change even if the file is moved or renamed.\n\n**Dependency chain**\n- Depends on the file existing in the NetSuite file cabinet prior to referencing.\n- Often used alongside related properties such as file name, folder ID, file type, or metadata to provide context or additional filtering.\n- May be required input for downstream processes that manipulate or validate file contents.\n\n**Technical details**\n- A string containing the numeric identifier assigned by NetSuite upon file creation.\n- Immutable once assigned; cannot be altered or reassigned to a different file.\n- Used internally by NetSuite APIs, SuiteScript, and integrations."},"function":{"type":"string","description":"Specifies the name of the custom function to be executed as a pre-send hook within the NetSuite distributed system. 
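As a minimal sketch (the function name and file ID here are hypothetical), the hook might be configured as:\n  ```json\n  {\"hooks\": {\"preSend\": {\"fileInternalId\": \"1234\", \"function\": \"preSendHandler\"}}}\n  ```\n  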
This function is invoked immediately before sending data or requests, allowing for custom processing, validation, or modification of the payload to ensure data integrity and compliance with business rules.\n\n  **Field behavior**\n  - Defines the exact function to be called prior to sending data or requests.\n  - Enables interception, inspection, and manipulation of data before transmission.\n  - Supports integration of custom business logic, validation, enrichment, or logging steps.\n  - Must reference a valid, accessible function within the current execution context or environment.\n  - The function’s execution outcome can influence whether the sending process proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the function name corresponds exactly to a defined function in the codebase, script environment, or registered hooks.\n  - The function should accept the expected input parameters (such as the payload or context) and return appropriate results or modifications.\n  - Implement robust error handling within the function to prevent unhandled exceptions that could disrupt the sending workflow.\n  - Document the function’s purpose, input/output contract, and side effects clearly for maintainability and future reference.\n  - Validate that the function executes efficiently and completes promptly to avoid introducing latency or blocking the sending process.\n  - If asynchronous operations are necessary, ensure they are properly awaited or handled to guarantee completion before sending.\n  - Follow consistent naming conventions aligned with the overall codebase or organizational standards.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"customAuthorizationCheck\"\n  - \"enrichOrderDetails\"\n  - \"checkInventoryAvailability\"\n\n  **Important notes**\n  - The function must be synchronous or correctly handle asynchronous behavior to ensure it completes before the send operation proceeds.\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the sending process depending on the implementation.\n  - Avoid performing long-running or blocking operations within the function to maintain system responsiveness.\n  - The function should not perform irreversible side effects unless explicitly intended, as it runs prior to data transmission.\n  - Ensure the function does not introduce security vulnerabilities, such as exposing sensitive data or allowing injection attacks.\n  - Consistent and clear error reporting within the function aids in troubleshooting and operational monitoring."},"configuration":{"type":"object","description":"Configuration settings for the preSend hook in the NetSuite distributed system.\n  This property defines the parameters and options that control the behavior and execution of the preSend hook, allowing customization of how data is processed before being sent.\n  It enables fine-tuning of operational aspects such as retries, timeouts, validation rules, logging, and payload constraints to ensure reliable and efficient data transmission.\n  **Field behavior**\n  - Specifies the customizable settings that dictate how the preSend hook operates.\n  - Controls data manipulation, validation, and preparation steps prior to sending.\n  - Can include flags, thresholds, retry policies, timeout durations, logging options, and other operational parameters.\n  - May be optional or mandatory depending on the 
specific implementation and requirements of the preSend hook.\n  - Supports nested configuration objects to allow detailed and structured settings.\n  **Implementation guidance**\n  - Define clear, well-documented configuration options that directly impact the preSend process.\n  - Validate all configuration values rigorously to ensure they conform to expected data types, ranges, and formats.\n  - Provide sensible default values for optional parameters to enhance usability and reduce configuration errors.\n  - Ensure backward compatibility when extending or modifying configuration options.\n  - Include comprehensive documentation for each configuration parameter, including its purpose, accepted values, and effect on hook behavior.\n  - Consider security implications when allowing configuration of headers or other sensitive parameters.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"customHeaders\": { \"X-Custom-Header\": \"value\" } }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  - `{ \"payloadCompression\": \"gzip\", \"timeout\": 10000 }`\n  **Important notes**\n  - Incorrect or invalid configuration values can cause the preSend hook to fail or behave unpredictably.\n  - Thorough testing of configuration changes in development or staging environments is critical before deploying to production.\n  - Some configuration changes may require restarting or reinitializing the hook or related services to take effect.\n  - Sensitive configuration parameters should be handled securely to prevent exposure of confidential information.\n  - Configuration should be version-controlled and documented to facilitate maintenance"}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
It allows for custom processing, modification, or validation of the request payload and headers, enabling dynamic adjustments or logging prior to transmission.\n\n**Field behavior**\n- Executed synchronously or asynchronously just before the API request is dispatched to the NetSuite endpoint.\n- Receives the full request object, including headers, body, query parameters, and other relevant metadata.\n- Permits modification of any part of the request, such as altering headers, adjusting the payload, or changing query parameters.\n- Supports validation logic to ensure the request meets required criteria; throwing an error will abort the request.\n- Enables injection of dynamic data like authentication tokens, custom headers, or correlation IDs.\n- Can be used for logging or auditing outgoing request details for debugging or monitoring purposes.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object.\n- Ensure that any asynchronous operations within the hook are properly awaited to maintain request integrity.\n- Keep processing lightweight to avoid introducing latency or blocking the request pipeline.\n- Handle exceptions carefully; unhandled errors will prevent the request from being sent.\n- Centralize request customization logic here to improve maintainability and reduce duplication.\n- Avoid side effects that could impact other parts of the system or subsequent requests.\n- Validate inputs thoroughly to prevent malformed requests from being sent to the API.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Modifying request parameters based on user roles or feature flags at runtime.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests across distributed systems.\n\n**Important notes**\n- This hook executes on every outgoing request, so its performance impact should be minimized.\n- Any modifications made within preSend directly affect the final request sent to NetSuite.\n- Throwing an error inside this hook will abort the request and propagate the error upstream.\n- This hook is strictly for pre-request processing and should not be used for handling responses.\n- Avoid making network calls or heavy computations inside this hook to prevent delays.\n- Ensure thread safety if the hook accesses shared resources or global state.\n\n**Dependency chain**\n- Invoked after request construction but before the request is dispatched.\n- Precedes any network transmission or retry"}},"description":"hooks: >\n  A collection of user-defined functions or callbacks that are executed at specific points during the lifecycle of the distributed process within the NetSuite integration. 
These hooks enable customization and extension of the default behavior by injecting custom logic before, during, or after key operations, allowing for flexible adaptation to unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Contains one or more functions or callback references mapped to specific lifecycle events.\n  - Each hook corresponds to a distinct event or stage in the distributed process, such as pre-processing, post-processing, error handling, or data transformation.\n  - Hooks are invoked automatically by the system at predefined points in the workflow.\n  - Can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors.\n  - Supports both synchronous and asynchronous execution models depending on the hook’s purpose and implementation.\n  - Execution order of hooks for the same event is deterministic and should be documented.\n  - Hooks should be designed to avoid side effects that could impact other parts of the process.\n\n  **Implementation guidance**\n  - Define hooks as named functions or references to executable code blocks compatible with the integration environment.\n  - Ensure hooks are idempotent to prevent unintended consequences from repeated or retried executions.\n  - Validate all inputs and outputs rigorously within hooks to maintain data integrity and system stability.\n  - Use hooks to integrate with external systems, perform custom validations, enrich data, or implement business-specific logic.\n  - Document each hook’s purpose, expected inputs, outputs, and any side effects clearly for maintainability.\n  - Implement robust error handling within hooks to gracefully manage exceptions without disrupting the main process flow.\n  - Test hooks thoroughly in isolated and integrated environments to ensure reliability and performance.\n  - Consider security implications, such as data exposure or injection risks, when implementing hooks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs detailed transaction metadata after a successful operation for auditing purposes.\n  - A hook that modifies or enriches payload data during transformation stages to align with NetSuite’s schema.\n  - A hook that triggers email or system notifications upon error occurrences to alert support teams.\n  - A hook that retries failed operations with exponential backoff to improve resilience.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: >\n  A collection of related sublist objects associated with the main record, representing grouped sets of data entries that provide additional details or linked information within the NetSuite distributed record context. 
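For instance (shape and names purely illustrative; consult the actual sublists of the record type in question), an item sublist entry might be sketched as:\n  ```json\n  {\"item\": [{\"item\": \"SKU-100\", \"quantity\": 2, \"rate\": 19.99}]}\n  ```\n  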
These sublists enable the organization of complex, hierarchical data by encapsulating related records or line items that belong to the primary record, facilitating detailed data management and interaction within the system.\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record.\n  - Organizes and structures complex record information into manageable, logically grouped sections.\n  - Supports nested and hierarchical data representation, allowing for detailed and granular record composition.\n  - Typically handled as arrays or lists of sublist objects, supporting iteration and manipulation.\n  - Reflects one-to-many relationships inherent in NetSuite records, such as line items or related entities.\n  **Implementation guidance**\n  - Ensure each sublist object strictly adheres to the defined schema and data types for its specific sublist type.\n  - Validate the consistency and referential integrity of sublist data in relation to the main record to prevent data anomalies.\n  - Design for dynamic handling of sublists, accommodating varying sizes including empty or large collections.\n  - Implement robust CRUD (Create, Read, Update, Delete) operations for sublist entries to maintain accurate and up-to-date data.\n  - Consider transactional integrity when modifying sublists to ensure changes are atomic and consistent.\n  **Examples**\n  - A sales order record containing sublists for item lines detailing products, quantities, and prices; shipping addresses specifying delivery locations; and payment schedules outlining installment plans.\n  - An employee record with sublists for dependents listing family members; employment history capturing previous roles and durations; and certifications documenting professional qualifications.\n  - A customer record including sublists for contacts with communication details; transactions recording purchase history; and communication logs tracking interactions and notes.\n  **Important notes**\n  - Sublists are critical for accurately modeling one-to-many relationships within NetSuite records, enabling detailed data capture and reporting.\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing.\n  - Maintaining synchronization between sublists and the main record is essential to preserve data integrity and prevent inconsistencies.\n  - Performance considerations should be taken into account when handling large sublists to optimize system responsiveness.\n  **Dependency chain**\n  - Depends on the main record schema"},"referencedFields":{"type":"object","description":"referencedFields: >\n  A list of field identifiers that are referenced within the current context, typically used to denote dependencies or relationships between fields in a NetSuite distributed environment. 
This property helps in mapping out how different fields interact or rely on each other, facilitating data integrity, validation, and synchronization across distributed components or services.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on or interacts with.\n  - Used to establish explicit relationships or dependencies between multiple fields.\n  - Enables tracking of data flow and ensures consistency across distributed systems.\n  - Supports dynamic resolution of dependencies during runtime or configuration.\n  **Implementation guidance**\n  - Populate with valid and existing field identifiers as defined in the NetSuite schema or metadata.\n  - Verify that all referenced fields are accessible and correctly scoped within the current context.\n  - Use this property to manage dependencies critical for data validation, synchronization, or processing logic.\n  - Keep the list updated to reflect any schema changes to avoid broken references or inconsistencies.\n  - Avoid circular references by carefully managing dependencies between fields.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  - [\"productCode\", \"inventoryLevel\", \"reorderThreshold\"]\n  **Important notes**\n  - Referenced fields must be unique within the list to prevent redundancy and confusion.\n  - Modifications to referenced fields can impact dependent processes; changes should be tested thoroughly.\n  - This property contains only the identifiers (names or keys) of fields, not their actual data or values.\n  - Proper documentation of referenced fields improves maintainability and clarity of dependencies.\n  **Dependency chain**\n  - Often linked with fields that require validation or data aggregation from other fields.\n  - May influence or be influenced by business rules, workflows, or automation scripts that depend on multiple fields.\n  - Changes in referenced fields can cascade to affect dependent fields or processes.\n  **Technical details**\n  - Data type: Array of strings.\n  - Each string represents a unique field identifier within the NetSuite distributed environment.\n  - The array should be serialized in a format compatible with the consuming system (e.g., JSON array).\n  - Maximum length and allowed characters for field identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects that represent associated records or entities linked to the primary record within the NetSuite distributed data model. These related lists provide contextual information and enable navigation to connected data, facilitating comprehensive data retrieval and management. 
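A purely hypothetical sketch of one entry (keys illustrative, not a guaranteed shape):\n```json\n{\"relatedLists\": {\"transactions\": {\"recordCount\": 12}}}\n```\n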
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, thereby supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries, each representing a distinct association to the primary record.\n- Enables retrieval of linked records such as transactions, custom records, subsidiary data, or other relevant entities.\n- Supports hierarchical or relational data structures by referencing related entities, allowing nested or multi-level associations.\n- Typically read-only in the context of distributed data retrieval but may support updates or synchronization depending on API capabilities and permissions.\n- May include metadata such as record counts, last updated timestamps, or status indicators for each related list.\n- Supports dynamic inclusion or exclusion based on user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Populate with relevant related list objects that are directly associated with the primary record, ensuring accurate representation of relationships.\n- Ensure each related list entry includes unique identifiers, descriptive metadata, and navigation links or references necessary for data access and traversal.\n- Maintain consistency in naming conventions, data structures, and field formats to align with NetSuite’s standard data model and API specifications.\n- Implement pagination, filtering, or sorting mechanisms to efficiently handle large sets of related records within each list.\n- Validate all references and links to ensure data integrity, preventing broken or stale connections within the distributed data environment.\n- Consider caching strategies or incremental updates to optimize performance when dealing with frequently accessed related lists.\n- Respect and enforce access control and permission checks to ensure users only see related lists they are authorized to access.\n\n**Examples**\n- A customer record’s relatedLists might include “Transactions” (e.g., sales orders, invoices), “Contacts” (associated individuals), and “Cases” (customer support tickets).\n- An invoice record’s relatedLists could contain “Payments” (payment records), “Shipments” (delivery details), and “Adjustments” (billing corrections).\n- A custom record type might have relatedLists such as “Attachments” (files linked to the record) or “Notes” (user comments or annotations).\n- A vendor record’s relatedLists may include “Purchase Orders,” “Bills,” and “Vendor Contacts"},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload the data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. 
This flag is critical in scenarios where data accuracy and freshness are paramount, such as after configuration changes or data updates that must be immediately reflected.\n\n**Field behavior:**\n- When set to true, the system bypasses all caches and reloads data or configurations directly from the primary source, ensuring the latest state is retrieved.\n- When set to false or omitted, the system may utilize cached or previously stored data to optimize performance and reduce load times.\n- Primarily used in contexts where stale data could lead to errors, inconsistencies, or outdated processing results.\n- The reload operation triggered by this flag typically involves invalidating caches and refreshing dependent components or services.\n\n**Implementation guidance:**\n- Use this flag judiciously to balance between data freshness and system performance, avoiding unnecessary reloads that could degrade responsiveness.\n- Ensure that enabling forceReload initiates a comprehensive refresh cycle, including clearing relevant caches and reinitializing configuration or data layers.\n- Implement robust error handling during the reload process to manage potential failures without causing system downtime or inconsistent states.\n- Monitor system resource utilization and response times when forceReload is active to identify and mitigate performance bottlenecks.\n- Document scenarios and triggers for using forceReload to guide developers and operators in its appropriate application.\n\n**Examples:**\n- forceReload: true  \n  (forces the system to bypass caches and reload data/configuration from the authoritative source immediately)\n- forceReload: false  \n  (allows the system to serve data from cache if available, improving response time)\n- forceReload omitted  \n  (defaults to false behavior, relying on cached data unless otherwise specified)\n\n**Important notes:**\n- Excessive or unnecessary use of forceReload can lead to increased latency, higher resource consumption, and potential service degradation.\n- This flag does not validate the correctness or integrity of the source data; it only ensures the latest available data is fetched.\n- Downstream systems or processes should be designed to handle the potential delays or transient states caused by forced reloads.\n- Coordination with cache invalidation policies and data synchronization mechanisms is essential to maintain overall system consistency.\n\n**Dependency chain:**\n- Relies on underlying cache management and invalidation frameworks to effectively bypass stored data.\n- Interacts with data retrieval modules, configuration loaders, and possibly distributed synchronization services.\n- May trigger logging, monitoring, or"},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across different operational environments. 
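For example, a live account would typically carry (value drawn from the options described below):\n```json\n{\"ioEnvironment\": \"production\"}\n```\n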
This property determines the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees within the distributed architecture.\n\n**Field behavior**\n- Determines the operational context for all input/output processes within the distributed NetSuite system.\n- Influences how data is read from and written to various storage systems, message queues, or communication channels.\n- Affects performance tuning, security measures, and data consistency mechanisms based on the selected environment.\n- Typically set during system initialization or configuration phases and remains stable during runtime to ensure predictable behavior.\n- May trigger environment-specific logging, monitoring, and error-handling strategies.\n\n**Implementation guidance**\n- Validate the ioEnvironment value against a predefined set of supported environments such as \"development,\" \"staging,\" \"production,\" and any custom configurations.\n- Ensure that the selected environment is compatible with other system settings related to data handling, network communication, and security policies.\n- Implement robust error handling and fallback mechanisms to manage unsupported or invalid environment values gracefully.\n- Clearly document the operational implications, limitations, and recommended use cases for each environment option to guide system administrators and developers.\n- Coordinate environment settings across all distributed nodes to maintain consistency and prevent configuration drift.\n\n**Examples**\n- \"development\" — used for local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production settings for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined environment configurations tailored for specialized I/O requirements or experimental setups.\n\n**Important notes**\n- Changing the ioEnvironment typically requires restarting services or reinitializing connections to apply new configurations.\n- The environment setting directly impacts data integrity, access controls, and compliance with security policies.\n- Sensitive data must be handled according to the security standards appropriate for the selected environment.\n- Consistency across all distributed nodes is critical; all nodes should be configured with compatible ioEnvironment values to avoid data inconsistencies or communication failures.\n- Misconfiguration can lead to degraded performance, security vulnerabilities, or data loss.\n\n**Dependency chain**\n- Depends on system initialization and configuration management components.\n- Interacts with data storage modules, network communication layers, and security frameworks.\n- Influences logging, monitoring, and error"},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment. 
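As an illustration (the exact domain is an assumption here and depends on the account's region and deployment):\n```json\n{\"ioDomain\": \"api.integrator.io\"}\n```\n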
This domain is critical for routing data requests and responses between distributed components and services, ensuring seamless communication and integration across the system.\n\n**Field behavior**\n- Defines the domain name utilized for network communication in distributed NetSuite environments.\n- Serves as the base domain for constructing URLs for API calls, data synchronization, and service endpoints.\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards.\n- Typically remains consistent within a deployment environment but can differ across environments such as development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify that the domain is properly configured in DNS and is resolvable by all distributed components.\n- Validate the domain format against standard domain naming conventions (e.g., RFC 1035).\n- Ensure the domain supports secure communication protocols (e.g., HTTPS with valid SSL/TLS certificates).\n- Coordinate updates to ioDomain with network, security, and operations teams to maintain service continuity.\n- When migrating or scaling services, update ioDomain accordingly and propagate changes to all dependent components.\n- Monitor domain accessibility and performance to detect and resolve connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n- \"dev-networks.internal.company.com\"\n\n**Important notes**\n- Incorrect or misconfigured ioDomain values can cause failed network requests, service interruptions, and data synchronization errors.\n- The domain must support necessary security certificates to enable encrypted communication and protect data in transit.\n- Changes to ioDomain may necessitate updates to firewall rules, proxy configurations, and network security policies.\n- Consistency in ioDomain usage across distributed components is essential to avoid routing conflicts and authentication issues.\n- Consider the impact on caching, CDN configurations, and DNS propagation delays when changing ioDomain.\n\n**Dependency chain**\n- Dependent on underlying network infrastructure, DNS setup, and domain registration.\n- Utilized by distributed service components for constructing communication endpoints.\n- May affect authentication and authorization workflows that rely on domain validation or origin verification.\n- Interacts with security components such as SSL/TLS certificate management and firewall configurations.\n- Influences monitoring, logging, and troubleshooting processes related to network communication."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the precise date and time when the data was last successfully synchronized between the system and NetSuite. 
This timestamp is essential for monitoring the freshness, consistency, and integrity of synchronized data, enabling systems to determine whether updates or incremental syncs are necessary.\n\n**Field behavior**\n- Captures the exact date and time of the most recent successful synchronization event.\n- Automatically updates only after a sync operation completes successfully without errors.\n- Serves as a reference point to assess if data is current or requires refreshing.\n- Typically stored and transmitted in ISO 8601 format to maintain uniformity across different systems and platforms.\n- Does not reflect the start time or duration of the synchronization process, only its successful completion.\n\n**Implementation guidance**\n- Record the timestamp in Coordinated Universal Time (UTC) to prevent timezone-related inconsistencies.\n- Update this field exclusively after confirming a successful synchronization to avoid misleading data states.\n- Validate the date and time format rigorously to comply with ISO 8601 standards (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Utilize this timestamp to drive incremental synchronization logic, data refresh triggers, or audit trails in downstream workflows.\n- Handle cases where the field may be null or missing, indicating that no synchronization has occurred yet.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n- \"2024-01-10T23:59:59Z\"\n\n**Important notes**\n- This timestamp marks the completion of synchronization, not its initiation.\n- Do not update this field if the synchronization process fails or is incomplete.\n- Maintaining timezone consistency (UTC) is critical to avoid synchronization conflicts or data mismatches.\n- The field may be null or omitted if synchronization has never been performed.\n- Systems relying on this field should implement fallback or error handling for missing or invalid timestamps.\n\n**Dependency chain**\n- Depends on successful completion of the synchronization process between the system and NetSuite.\n- Influences downstream processes such as incremental sync triggers, data validation, and audit logging.\n- May be referenced by monitoring or alerting systems to detect synchronization delays or failures.\n\n**Technical details**\n- Stored as a string in ISO 8601 format with UTC timezone designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Should be generated programmatically at the moment synchronization completes successfully"},"settings":{"type":"object","description":"settings: >\n  Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - Encapsulates a collection of key-value pairs representing various configuration parameters that govern the distributed NetSuite integration’s operation.\n  - Includes toggles (boolean flags), numeric thresholds, timeouts, batch sizes, logging options, and other customizable settings relevant to distributed processing workflows.\n  - Typically optional for basic usage but essential for advanced customization, performance tuning, and adapting the integration to specific deployment environments.\n  - Changes to these settings can dynamically alter the integration’s behavior, such as retry logic, concurrency limits, and error handling strategies.\n  **Implementation guidance**\n  - Define each setting with a clear, descriptive key name and an appropriate data type (e.g., integer, boolean, string).\n  - Validate input 
values rigorously to ensure they fall within acceptable ranges or conform to expected formats to prevent runtime errors.\n  - Provide sensible default values for all settings to maintain stable and predictable integration behavior when explicit configuration is absent.\n  - Document each setting comprehensively, including its purpose, valid values, default, and impact on the integration’s operation.\n  - Consider versioning or schema validation to manage changes in settings structure over time.\n  - Ensure that sensitive information is either excluded or securely handled if included within settings.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — configures retry attempts, enables detailed logging, and sets operation timeout.\n  - `{ \"batchSize\": 50, \"useNonProduction\": false }` — sets the number of records processed per batch and specifies production environment usage.\n  - `{ \"maxConcurrentJobs\": 10, \"errorThreshold\": 5, \"logLevel\": \"DEBUG\" }` — limits concurrent jobs, sets error tolerance, and defines logging verbosity.\n  **Important notes**\n  - Modifications to settings may require restarting or reinitializing the integration service to apply changes effectively.\n  - Incorrect or suboptimal configuration can cause integration failures, data inconsistencies, or degraded performance.\n  - Avoid storing sensitive credentials or secrets in settings unless encrypted or otherwise secured.\n  - Settings should be managed carefully in multi-environment deployments to prevent configuration drift.\n  **Dependency chain**\n  - Dependent on the overall NetSuite integration configuration and"},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to utilize the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture within the NetSuite environment.\n\n**Field behavior**\n- Determines if the SuiteScript 2.0 (SS2) framework is enabled for the NetSuite integration.\n- When set to true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions.\n- When set to false or omitted, the system defaults to using SuiteScript 1.0 or legacy frameworks.\n- Influences script loading mechanisms, module resolution, and API compatibility within NetSuite.\n- Impacts debugging, deployment, and maintenance processes due to differences in framework structure.\n\n**Implementation guidance**\n- Set this property to true to leverage modern SuiteScript 2.0 features such as improved modularity, asynchronous processing, and enhanced performance.\n- Verify that all custom scripts, modules, and third-party integrations are fully compatible with SuiteScript 2.0 before enabling this flag.\n- Conduct thorough testing in a non-production or development environment to identify potential issues arising from framework changes.\n- Use this flag to facilitate gradual migration from legacy SuiteScript 1.0 to SuiteScript 2.0, allowing toggling between frameworks during transition phases.\n- Update deployment pipelines and CI/CD processes to accommodate SuiteScript 2.0 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` — Disables SuiteScript 2.0, falling back to legacy SuiteScript 1.0 framework.\n- Property omitted — Defaults to legacy SuiteScript framework (typically 1.0), maintaining backward 
compatibility.\n\n**Important notes**\n- Enabling the SS2 framework may require refactoring existing scripts to comply with SuiteScript 2.0 syntax, including the use of define/require for module loading.\n- Some legacy APIs, global objects, and modules available in SuiteScript 1.0 may be deprecated or behave differently in SuiteScript 2.0.\n- Performance improvements and new features in SS2 may not be realized if scripts are not properly adapted.\n- Ensure that all scheduled scripts, workflows, and integrations are reviewed for compatibility to prevent runtime errors.\n- Documentation and developer training may be necessary to fully leverage SuiteScript 2.0 capabilities."},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the framework version within the NetSuite distributed system. It defines the nature or role of the framework version, such as whether it is a major release, minor update, patch, or experimental build. This classification is essential for managing version control, deployment strategies, and compatibility assessments across the distributed environment.\n\n**Field behavior**\n- Determines how the framework version is identified, categorized, and processed within the system.\n- Influences compatibility checks, update mechanisms, and deployment workflows.\n- Enables filtering, sorting, and selection of framework versions based on their type.\n- Affects automated decision-making processes such as rollback, promotion, or deprecation of versions.\n\n**Implementation guidance**\n- Use standardized, predefined values or enumerations to represent different types (e.g., \"major\", \"minor\", \"patch\", \"experimental\") to ensure consistency.\n- Maintain uniform naming conventions and case sensitivity across all framework versions.\n- Implement validation logic to restrict the type property to allowed categories, preventing invalid or unsupported entries.\n- Document any custom or extended types clearly to avoid ambiguity.\n- Ensure that changes to the type property trigger appropriate notifications or logging for audit purposes.\n\n**Examples**\n- \"major\" — indicating a significant release that introduces new features or breaking changes.\n- \"minor\" — representing smaller updates that add enhancements or non-breaking improvements.\n- \"patch\" — for releases focused on bug fixes, security patches, or minor corrections.\n- \"experimental\" — denoting versions under testing, development, or not intended for production use.\n- \"deprecated\" — marking versions that are no longer supported or recommended for use.\n\n**Important notes**\n- The type value directly impacts deployment strategies, including automated rollouts and rollback procedures.\n- Accurate and consistent typing is critical for automated update systems and dependency management tools to function correctly.\n- Changes to the type property should be documented thoroughly and communicated to all relevant stakeholders to avoid confusion.\n- Misclassification can lead to improper handling of versions, potentially causing system instability or incompatibility.\n- The property should be reviewed regularly to align with evolving release management policies.\n\n**Dependency chain**\n- Depends on the version numbering scheme and release management policies defined in the frameworkVersion.\n- Interacts with deployment, update, and compatibility modules that rely on the type classification to 
determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alerting."},"enum":{"type":"array","items":{"type":"string"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management."},"lowercase":{"type":"boolean","description":"Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the content or structure of the version string, only its 
letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided by the programming environment.\n- Should be applied consistently across"}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system. 
It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts 
or quantities equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or recalculation of the related distribution details."},"getList":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the category or classification of the records to be retrieved from the NetSuite system.\nThis property determines the type of entities that the getList operation will query and return.\nIt defines the scope of the data retrieval by indicating which NetSuite record type the API should target.\n\n**Field behavior**\n- Defines the specific record type to fetch, such as customers, transactions, or items.\n- Influences the structure, fields, and format of the returned data based on the selected record type.\n- Must be set to a valid NetSuite record type identifier recognized by the API.\n- Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n\n**Implementation guidance**\n- Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n- Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n- Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n- Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n- When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"inventoryItem\"\n- \"employee\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n- The type property directly affects query performance, response size, and the complexity of the returned data.\n- Some record types may require
additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n- Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n\n**Dependency chain**\n- The 'type' property is a required input for the NetSuite.getList operation.\n- The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n- Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n- Validation and error handling mechanisms rely on the correctness of the 'type' value.\n\n**Technical details**\n- Accepts string values corresponding to NetSuite record type identifiers."},"typeId":{"type":"string","description":"The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n\n**Field behavior**\n- Specifies the category or type of records to fetch from NetSuite.\n- Determines the schema and fields available in the returned records.\n- Must correspond to a valid NetSuite record type identifier.\n\n**Implementation guidance**\n- Use predefined NetSuite record type IDs as per NetSuite documentation.\n- Validate the typeId before making the API call to avoid errors.\n- Ensure the typeId aligns with the permissions and roles of the API user.\n\n**Examples**\n- \"customer\" for customer records.\n- \"salesOrder\" for sales order records.\n- \"employee\" for employee records.\n\n**Important notes**\n- Incorrect or unsupported typeId values will result in API errors.\n- The typeId is case-sensitive and must match NetSuite's expected values.\n- Changes in NetSuite's API or record types may affect valid typeId values.\n\n**Dependency chain**\n- Depends on the NetSuite record types supported by the account.\n- Influences the structure and content of the getList response.\n\n**Technical details**\n- Typically a string value representing the internal NetSuite record type.\n- Used as a parameter in the getList API endpoint to filter records.\n- Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system. 
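\n\nFor instance (a sketch; values illustrative), a getList configuration that targets a single record by its internal ID could look like:\n```json\n{ \"getList\": { \"typeId\": \"customer\", \"internalId\": \"12345\" } }\n```\n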
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- 
Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"string","description":"The unique identifier for a record within the NetSuite system. This identifier is used to retrieve, update, or reference specific records in API operations.\n\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
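\n\nAs a hedged sketch (record type illustrative), this operation is configured under the export's `netsuite` object:\n```json\n{\n  \"netsuite\": {\n    \"getList\": { \"typeId\": \"salesOrder\" }\n  }\n}\n```\n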
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. 
This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May interact with sibling preferences such as `pageSize` and `returnSearchColumns` when tuning search responses."},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks. 
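\n\nA combined searchPreferences sketch using the fields documented in this object (values illustrative):\n```json\n{\n  \"searchPreferences\": {\n    \"bodyFieldsOnly\": true,\n    \"pageSize\": 50,\n    \"returnSearchColumns\": false\n  }\n}\n```\n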
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting pageSize keeps pagination behavior predictable."},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce 
bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `bodyFieldsOnly`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `bodyFieldsOnly` (controls whether only the main record's body fields are returned).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending
order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with the individual preference fields defined in this object (`bodyFieldsOnly`, `pageSize`, `returnSearchColumns`)."},"file":{"type":"object","description":"Configuration for retrieving files from the NetSuite File Cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead. 
For example, {{{myFileField.fileName}}}.\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier used internally by NetSuite\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user and its roles must have access to the folder\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder hierarchy changes do not affect the folderInternalId, ensuring persistent reference integrity\n- Internal IDs may differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored. 
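\n\nA hedged sketch showing it alongside folderInternalId inside a file export (IDs illustrative):\n```json\n{\n  \"netsuite\": {\n    \"file\": {\n      \"folderInternalId\": \"12345\",\n      \"backupFolderInternalId\": \"67890\"\n    }\n  }\n}\n```\n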
This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Passed as a string containing the numeric folder ID in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"112233\"\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting or renaming the folder associated with this ID will disrupt backup processes until updated.\n- Proper access rights and permissions are mandatory to write backup files to the specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: String containing the numeric internal folder ID.\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript APIs or REST API calls."}}}},"required":[]},"RDBMS":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Rdbms export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be 
included for other connection types. It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. 
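For instance (illustrative), a source object at \"2024/06/orders.csv\" backed up with keyPrefix \"processed/\" is written to \"processed/orders.csv\", not \"processed/2024/06/orders.csv\". 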
Only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (e.g., files, HTTP responses). Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. 
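For example (illustrative), an `<order>` element that appears only once in a sample document would otherwise be parsed as a single object rather than a one-element array. 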
Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. 
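\n\nPutting the rule options together, a complete parser entry might look like this sketch (node paths illustrative):\n```json\n[{\n  \"version\": \"1\",\n  \"type\": \"xml\",\n  \"rules\": {\n    \"V0_json\": false,\n    \"listNodes\": [\"/orders/order\"],\n    \"trimSpaces\": true,\n    \"attributePrefix\": \"Att-\",\n    \"textNodeName\": \"value\"\n  }\n}]\n```\n\n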
Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. 
REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array (unless you copy an entire object from the input via `extract`)\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated in two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (a source array that already matches the output shape can instead be passed through via `extract` alone; see `dataType`):\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the array must be constructed or combined rather than copied as-is\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
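For instance, a helper entry that pulls a list of tag strings might declare (an illustrative sketch; the path is sample data):\n```json\n{\n  \"extract\": \"$.product.tags[*]\",\n  \"sourceDataType\": \"stringarray\"\n}\n```\n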
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. 
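In context, a minimal rule-based transform might look like the following sketch (the mapping shown is sample data):\n```json\n{\n  \"type\": \"expression\",\n  \"expression\": {\n    \"version\": \"2\",\n    \"rulesTwoDotZero\": {\n      \"mode\": \"create\",\n      \"mappings\": [\n        {\n          \"generate\": \"fullName\",\n          \"dataType\": \"string\",\n          \"extract\": \"{{record.firstName}} {{record.lastName}}\",\n          \"status\": \"Active\"\n        }\n      ]\n    }\n  }\n}\n```\n\n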
Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. 
**Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - 
\"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"Settings":{"type":"object","description":"Configuration settings that hooks, filters, mappings, and handlebars templates can access and apply at runtime to customize the resource's logic.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. 
It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. 
The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. 
`500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/exports/{_id}/clone":{"post":{"summary":"Clone an export","description":"Creates a copy of an existing export.\nSupports optionally remapping referenced connections (via connectionMap).\n","operationId":"cloneExport","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export to clone","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":false,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/CloneRequest"}}}},"responses":{"200":{"description":"Export cloned successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/CloneResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
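
A minimal sketch of cloning an export with Python's `requests` library. The bearer token and ids are placeholders, and the `connectionMap` shape shown (existing connection `_id` mapped to a replacement `_id`) is an assumption: the `CloneRequest` schema is referenced in the spec above but not expanded here, so verify the exact field layout against your account.

```python
import requests

API_BASE = "https://api.integrator.io"  # or https://api.eu.integrator.io for EU accounts
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder 24-char export _id

# Assumed CloneRequest shape: connectionMap remaps referenced connections,
# keyed by the existing connection _id. Omit the body to clone as-is
# (the request body is optional).
body = {"connectionMap": {"OLD_CONNECTION_ID": "NEW_CONNECTION_ID"}}

resp = requests.post(
    f"{API_BASE}/v1/exports/{EXPORT_ID}/clone",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()  # 401 and 404 raise here
print(resp.json())       # 200: the CloneResponse document for the new export
```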

## Replace connection on export present in a flow

> Replaces the connection used by an export in a flow and cancels any running jobs.\
> This is useful when migrating flows between environments or updating to newer connection versions.<br>

```json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. 
`application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/exports/{_id}/replaceConnection":{"put":{"summary":"Replace connection on export present in a flow","description":"Replaces the connection used by an export in a flow and cancels any running jobs.\nThis is useful when migrating flows between environments or updating to newer connection versions.\n","operationId":"replaceConnectionOnExport","tags":["Exports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the export","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"_newConnectionId":{"type":"string","description":"The id of the new connection to be used"}},"required":["_newConnectionId"]}}}},"responses":{"204":{"description":"Successfully replaced connection on export"},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
```
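
A minimal sketch of the call with Python's `requests` library; the token and ids are placeholders. Per the spec above, the body carries the required `_newConnectionId`, and success is a bodiless 204.

```python
import requests

API_BASE = "https://api.integrator.io"  # or https://api.eu.integrator.io for EU accounts
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder export _id

resp = requests.put(
    f"{API_BASE}/v1/exports/{EXPORT_ID}/replaceConnection",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"_newConnectionId": "NEW_CONNECTION_ID"},  # placeholder connection _id
)

if resp.status_code == 204:
    # Success: the connection was swapped and any running jobs were cancelled.
    print("connection replaced")
else:
    # 400 and 404 use the standard {"errors": [...]} envelope.
    print(resp.status_code, resp.json())
```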

## Invoke an export and return its data

> Runs an existing export end-to-end and returns the fetched data (or errors)\
> synchronously. Unlike `POST /v1/flows/{_id}/run`, which starts a full flow\
> job, this endpoint invokes a **single export** in isolation and returns the\
> raw result directly in the response body.
>
> The request body is optional — pass `{}` or omit the body entirely for\
> exports that require no input. Some adaptor types accept a `data` array in\
> the body to supply input records.
>
> On success, the response contains the export's fetched data. On\
> application-level failure (e.g. the source system is unreachable), the\
> endpoint still returns **200** with the errors in an `errors` array — 4xx\
> is reserved for request-level validation (bad ID, missing auth).
>
> AI guidance:
>
> - This endpoint **actually executes** the export against the live source\
>   system — use `POST /v1/exports/preview` for dry-run testing.
> - POST-only; GET on this path returns 404.
> - The 404 error shape for an invalid export ID is\
>   `{"errors": {"code": "invalid_ref", "message": "Export not found."}}` —\
>   note the non-standard singular `errors` object (not an array).

```json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/exports/{_id}/invoke":{"post":{"operationId":"invokeExport","tags":["Exports"],"summary":"Invoke an export and return its data","description":"Runs an existing export end-to-end and returns the fetched data (or errors)\nsynchronously. Unlike `POST /v1/flows/{_id}/run`, which starts a full flow\njob, this endpoint invokes a **single export** in isolation and returns the\nraw result directly in the response body.\n\nThe request body is optional — pass `{}` or omit the body entirely for\nexports that require no input. Some adaptor types accept a `data` array in\nthe body to supply input records.\n\nOn success, the response contains the export's fetched data. On\napplication-level failure (e.g. the source system is unreachable), the\nendpoint still returns **200** with the errors in an `errors` array — 4xx\nis reserved for request-level validation (bad ID, missing auth).\n\nAI guidance:\n- This endpoint **actually executes** the export against the live source\n  system — use `POST /v1/exports/preview` for dry-run testing.\n- POST-only; GET on this path returns 404.\n- The 404 error shape for an invalid export ID is\n  `{\"errors\": {\"code\": \"invalid_ref\", \"message\": \"Export not found.\"}}` —\n  note the non-standard singular `errors` object (not an array).","parameters":[{"in":"path","name":"_id","required":true,"schema":{"type":"string","format":"objectId"},"description":"Export ID"}],"requestBody":{"required":false,"content":{"application/json":{"schema":{"type":"object","description":"Optional input payload. Most exports ignore the body; some accept\na `data` array of records to feed into the export pipeline.","properties":{"data":{"type":"array","description":"Input records for the export (adaptor-dependent)","items":{"type":"object","additionalProperties":true}}},"additionalProperties":true}}}},"responses":{"200":{"description":"Export completed. The response contains either the fetched data or an\n`errors` array if the export encountered application-level failures\n(connection timeout, file not found, etc.). 
Always inspect for errors\neven on 200.","content":{"application/json":{"schema":{"type":"object","additionalProperties":true}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"description":"Export not found","content":{"application/json":{"schema":{"type":"object","properties":{"errors":{"type":"object","properties":{"code":{"type":"string"},"message":{"type":"string"}}}}}}}},"500":{"description":"Server error — can occur when the export is misconfigured (e.g. a\nSimpleExport with no connection)."}}}}}}
```
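
A minimal sketch of invoking an export with Python's `requests` library (token and id are placeholders). Note the body check: per the spec above, application-level failures still come back as 200 with an `errors` array, so a successful HTTP status alone proves nothing.

```python
import requests

API_BASE = "https://api.integrator.io"  # or https://api.eu.integrator.io for EU accounts
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder export _id

resp = requests.post(
    f"{API_BASE}/v1/exports/{EXPORT_ID}/invoke",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={},  # optional; some adaptor types accept {"data": [...]} input records
)
resp.raise_for_status()  # only request-level failures (401, 404, 500) raise

result = resp.json()
# Application-level failures still return 200, so inspect the body for
# an errors array before consuming the data.
if isinstance(result, dict) and result.get("errors"):
    print("export failed:", result["errors"])
else:
    print("export data:", result)
```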

## List dependencies of an export

> Returns the set of resources that depend on the specified resource.\
> The response is an object whose keys are dependent-resource types\
> (e.g. `flows`, `imports`) and whose values are arrays of dependency\
> entries.
>
> AI guidance:
>
> - An empty object `{}` means no other resources depend on the target.\
>   This is also returned for a well-formatted but nonexistent id.

```json
{"openapi":"3.1.0","info":{"title":"Exports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"DependencyResponse":{"type":"object","description":"Map of dependent-resource types to arrays of dependency entries.\nKeys are plural resource type strings (e.g. `flows`, `imports`,\n`connections`). An empty object `{}` means no dependents.\n","additionalProperties":{"type":"array","items":{"$ref":"#/components/schemas/DependencyEntry"}}},"DependencyEntry":{"type":"object","description":"A single resource that depends on the queried resource.","properties":{"id":{"type":"string","description":"Unique identifier of the dependent resource."},"name":{"type":"string","description":"Display name of the dependent resource."},"paths":{"type":"array","description":"JSON-path-style pointers within the dependent resource's document\nthat reference the target resource.\n","items":{"type":"string"}},"accessLevel":{"type":"string","description":"The caller's access level on the dependent resource."},"dependencyIds":{"type":"object","description":"Map of resource types to arrays of ids that this dependent\nresource references on the target. Keys are singular or plural\nresource type strings; values are arrays of id strings.\n","additionalProperties":{"type":"array","items":{"type":"string"}}}},"required":["id","name","paths","accessLevel","dependencyIds"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/exports/{_id}/dependencies":{"get":{"operationId":"listExportDependencies","tags":["Exports"],"summary":"List dependencies of an export","description":"Returns the set of resources that depend on the specified resource.\nThe response is an object whose keys are dependent-resource types\n(e.g. `flows`, `imports`) and whose values are arrays of dependency\nentries.\n\nAI guidance:\n- An empty object `{}` means no other resources depend on the target.\n  This is also returned for a well-formatted but nonexistent id.","parameters":[{"name":"_id","in":"path","required":true,"description":"Resource ID.","schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Dependency map. Keys are resource-type strings; values are arrays\nof dependency entries. Returns `{}` when no dependents exist.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DependencyResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
```
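
A minimal sketch of walking the dependency map with Python's `requests` library (token and id are placeholders). The loop relies only on the `DependencyResponse` and `DependencyEntry` shapes defined in the spec above.

```python
import requests

API_BASE = "https://api.integrator.io"  # or https://api.eu.integrator.io for EU accounts
TOKEN = "YOUR_API_TOKEN"                # placeholder bearer token
EXPORT_ID = "507f1f77bcf86cd799439011"  # placeholder export _id

resp = requests.get(
    f"{API_BASE}/v1/exports/{EXPORT_ID}/dependencies",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

deps = resp.json()
if not deps:
    # {} covers both "no dependents" and a well-formed but nonexistent id.
    print("no dependents")
for resource_type, entries in deps.items():  # e.g. "flows", "imports"
    for entry in entries:
        print(f'{resource_type}: {entry["name"]} ({entry["id"]}) via {entry["paths"]}')
```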


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://developer.celigo.com/api/api-reference/exports.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
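
A minimal sketch of issuing such a query with Python's `requests` library. The question text is a hypothetical example, and URL-encoding it is a standard precaution rather than a documented requirement of this mechanism.

```python
import requests
from urllib.parse import quote

# Hypothetical question; any specific, self-contained natural-language
# question works here.
question = "Which endpoints on this page return a 204 with no body?"

url = f"https://developer.celigo.com/api/api-reference/exports.md?ask={quote(question)}"
resp = requests.get(url)
print(resp.text)  # a direct answer plus relevant excerpts and sources
```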
