# Tools

## List tools

> Returns a list of all tools configured in the account.\
> If no tools exist in the account, a 204 response with no body is returned.\
> \
> Results can be filtered by integration using the `_integrationId` query parameter.
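
The request shape can be sketched as follows. This is a minimal sketch, assuming the list endpoint lives at `/v1/tools` (the path itself is not shown in this excerpt; other list endpoints in this API follow the `/v1/<resource>` pattern) and that it honors the shared `include` projection parameter defined below:

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE_URL = "https://api.integrator.io"  # EU accounts: https://api.eu.integrator.io


def build_list_tools_request(token, integration_id=None, include=None):
    """Build a GET request for the tool list endpoint (path assumed)."""
    params = {}
    if integration_id:
        params["_integrationId"] = integration_id  # filter by parent integration
    if include:
        # `include` triggers summary projection; never combine it with
        # `exclude` -- passing both returns 400 invalid_query_params.
        params["include"] = ",".join(include)
    url = f"{BASE_URL}/v1/tools"
    if params:
        url += "?" + urlencode(params)
    return Request(url, headers={"Authorization": f"Bearer {token}"})


req = build_list_tools_request("MY_API_TOKEN",
                               integration_id="1234567890abcdef12345678",
                               include=["name", "aiDescription"])
# req.full_url ->
# https://api.integrator.io/v1/tools?_integrationId=1234567890abcdef12345678&include=name%2CaiDescription
```

The token, integration id, and projected field names above are illustrative placeholders; a 200 response carries the tool list, while an empty account yields the 204 described above.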

````json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"parameters":{"Include":{"name":"include","in":"query","required":false,"description":"Comma-separated list of fields to project into each returned record.\nTriggers **summary projection** on supported list endpoints: the server\nreturns a minimal identity set for each record (`_id`, `name`, plus a\nresource-specific always-on set like `adaptorType` on exports/imports,\nor richer defaults on `ashares`, `audit`, `httpconnectors`, `transfers`,\netc.) and adds any listed fields that exist on the record. Listed fields\nthe record doesn't carry are silently dropped.\n\nDot notation is supported for projecting nested sub-fields — e.g.\n`include=ftp.directoryPath` on `/v1/exports` returns just that nested\nfield inside `ftp` for FTP-type exports (and omits `ftp` entirely for\nnon-FTP exports).\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`include=`) or bare `include` is ignored — the full\n  default record is returned.\n- `include` and `exclude` are **mutually exclusive**. 
Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Array-bracket syntax (`include[]=...`) is not supported and can return\n  a 500.\n- Only list endpoints honor projection — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}},"Exclude":{"name":"exclude","in":"query","required":false,"description":"Comma-separated list of fields to remove from the default response on\nsupported list endpoints. Unlike `include`, `exclude` does NOT trigger\nsummary projection — callers get the standard full-record shape with the\nnamed fields stripped out.\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`exclude=`) is ignored.\n- Certain protected identity fields **cannot be stripped** — e.g.\n  `exclude=name` on `/v1/exports` is silently ignored and `name` remains\n  in the response. Protected sets vary per resource.\n- `include` and `exclude` are **mutually exclusive**. 
Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Only list endpoints honor stripping — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}}},"schemas":{"Response":{"type":"object","description":"Response schema for tool operations.\n\nContains the complete tool configuration including metadata, input/output\nsettings, routing logic, and AI-generated descriptions.\n","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the tool"},"name":{"type":"string","description":"Human-readable name for the tool"},"description":{"type":"string","description":"Detailed description of the tool's purpose"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the parent integration"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Routing configuration for conditional processing.\n\nOnly present when the tool has routers configured.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was created"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was last modified"},"deletedAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was soft-deleted.\n\nOnly present for deleted tools. 
The tool will be permanently\nremoved 30 days after this timestamp.\n"}}},"Input":{"type":"object","description":"Configuration for the tool's input processing.\n\nDefines the expected input structure, optional transformations to apply\nbefore routing, and mock data for testing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the input configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the expected input data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the expected input data structure.\n\nUsed for validation, documentation, and AI-assisted tooling.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"transform":{"$ref":"#/components/schemas/Transform"},"mockInput":{"type":"object","description":"Mock data for testing the tool's input processing.\n\nProvides sample input to test transformation logic and routing\nwithout requiring live data. Maximum size: 1MB.\n","additionalProperties":true}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. 
It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all 
mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. 
It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. 
Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create a proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. 
**For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. 
When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract 
fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could 
introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Output":{"type":"object","description":"Configuration for the tool's output processing.\n\nDefines how the tool's results are mapped, transformed, and enriched\nbefore being returned. Supports field mappings, lookups for data\nenrichment, and custom script hooks for pre/post-mapping processing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the output configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the output data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the output data structure.\n\nUsed for documentation and validation of the tool's output.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"mappings":{"type":"array","description":"Field mappings to transform data into the output format.\n\nMaps data from processing results to the output structure.\nUses Celigo's standard mapping format with extract/generate field paths.\n","items":{"$ref":"#/components/schemas/Mappings"}},"lookups":{"type":"array","description":"Lookup tables for data enrichment during output processing.\n\nStatic key-value mappings used to translate values (e.g., status codes,\ncategory names) during output generation.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the lookup, used to reference it from mappings.\n"},"map":{"type":"object","description":"Key-value mapping object. 
Keys are the input values and\nvalues are the corresponding output values.\n","additionalProperties":true},"default":{"type":"string","description":"Default value returned when the input key is not found in the map.\n"},"allowFailures":{"type":"boolean","description":"Whether to continue processing if the lookup fails to find a match\nand no default is provided.\n"}},"required":["name"]}},"hooks":{"type":"object","description":"Custom script hooks for pre- and post-mapping processing.\n\nAllows running custom JavaScript functions before and after\noutput mappings are applied.\n","properties":{"preMap":{"type":"object","description":"Script to run before applying output mappings.\n\nCan modify the data before it is mapped to the output structure.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}},"postMap":{"type":"object","description":"Script to run after applying output mappings.\n\nCan modify the final output data after mappings are applied.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}}}},"mockInput":{"type":"object","description":"Mock data for testing the tool's output processing.\n\nProvides sample data that would arrive from the routing/processing\nstage, used to test mapping and lookup logic. Maximum size: 1MB.\n","additionalProperties":true}}},"Router":{"type":"object","description":"Configuration for conditional routing within a tool.\n\nRouters evaluate input data and direct it to different processing branches\nbased on criteria. 
This enables complex business logic and conditional\nprocessing within the tool.\n\nUnlike flows, tools only support \"first_matching_branch\" routing strategy.\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal sink to exit the tool and return results.\n","properties":{"id":{"type":"string","description":"Unique identifier for this router within the tool.\n\nUsed to reference this router from other routers' branch `nextRouterId`.\n"},"name":{"type":"string","maxLength":300,"description":"Human-readable name for the router.\n"},"routeRecordsTo":{"type":"string","enum":["first_matching_branch"],"description":"Routing strategy. Tools only support \"first_matching_branch\",\nwhich routes to the first branch whose criteria match the input.\n"},"routeRecordsUsing":{"type":"string","enum":["input_filters","script"],"description":"Method used to evaluate routing criteria.\n\n- **input_filters**: Use declarative filter expressions on each branch\n- **script**: Use a custom JavaScript function to determine the branch\n"},"script":{"type":"object","description":"Script configuration when routeRecordsUsing is \"script\".\n\nThe function should return the name of the branch to route to.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name that returns the branch name"}}},"branches":{"type":"array","description":"List of branches defining different processing paths.\n\nEach branch has optional filter criteria and a set of processing steps.\nRecords are evaluated against branch criteria in order; the first\nmatching branch is selected.\n","items":{"type":"object","properties":{"name":{"type":"string","maxLength":300,"description":"Name of this branch.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of when and why this branch is selected.\n"},"inputFilter":{"type":"object","description":"Filter 
criteria to determine if this branch should be selected.\n\nUses Celigo's expression-based filter format.\n","properties":{"version":{"type":"string","enum":["1"],"description":"Filter version"},"rules":{"type":"array","description":"Filter rules in Celigo expression-based filter format.\n\nArray-based DSL where the first element is an operator (e.g., \"equals\", \"and\", \"or\"),\nfollowed by operands which can be nested expressions.\n","items":{}}}},"nextRouterId":{"type":"string","description":"Identifier of the next router to chain to after this branch completes.\n\nUse \"outputRouter\" as a special terminal value to exit the tool\nand return the processing results.\n"},"pageProcessors":{"type":"array","description":"Processing steps to execute in this branch.\n\nEach processor references an export (lookup) or import resource\nfor data retrieval or submission.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["export","import"],"description":"Type of processor.\n\n- **export**: Retrieves data from an external system (lookup)\n- **import**: Sends data to an external system\n"},"_exportId":{"type":"string","format":"objectId","description":"Export resource reference (when type is \"export\")"},"_importId":{"type":"string","format":"objectId","description":"Import resource reference (when type is \"import\")"},"proceedOnFailure":{"type":"boolean","description":"Whether to continue processing subsequent steps if this\nprocessor fails.\n"},"responseMapping":{"type":"object","description":"Mapping configuration for the processor's response data.\n","properties":{"fields":{"type":"array","description":"Simple field-level mappings","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path from the response"},"generate":{"type":"string","description":"Target field path in the record"}}}},"lists":{"type":"array","description":"List-level mappings for array 
data","items":{"type":"object","properties":{"generate":{"type":"string","description":"Target list path"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path"},"generate":{"type":"string","description":"Target field path"}}}}}}}}},"hooks":{"type":"object","description":"Custom scripts for processing","properties":{"postResponseMap":{"type":"object","description":"Script to run after response mapping","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute"}}}}}}}}}}}},"required":["id","branches"]},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. 
Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/tools":{"get":{"summary":"List tools","description":"Returns a list of all tools configured in the account.\nIf no tools exist in the account, a 204 response with no body will be returned.\n\nResults can be filtered by integration using the `_integrationId` query parameter.\n","operationId":"listTools","tags":["Tools"],"parameters":[{"name":"_integrationId","in":"query","description":"Filter tools by integration identifier","required":false,"schema":{"type":"string"}},{"$ref":"#/components/parameters/Include"},{"$ref":"#/components/parameters/Exclude"}],"responses":{"200":{"description":"Successfully retrieved list of tools","headers":{"Link":{"description":"RFC-5988 pagination links. When more pages remain, includes a `<...>; rel=\"next\"` entry;\nabsent on the final page.\n","schema":{"type":"string"}}},"content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Response"}}}}},"204":{"description":"No tools exist in the account"},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
````
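Per the spec above, the list call returns 200 with a JSON array, 204 with no body when the account has no tools, and paginates via an RFC-5988 `Link` response header that carries a `rel="next"` entry while more pages remain. The sketch below shows stdlib-only parsing of that header; it is not vendor SDK code, and the example URL's `after` cursor is an illustrative placeholder, not a documented query parameter.

```python
import re

def parse_next_link(link_header):
    """Return the rel="next" URL from an RFC-5988 Link header, or None.

    The header is absent on the final page, so None means "stop paging".
    """
    if not link_header:
        return None
    for part in link_header.split(","):
        match = re.match(r'\s*<([^>]+)>\s*;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None

# Illustrative header value; the cursor parameter is hypothetical.
header = '<https://api.integrator.io/v1/tools?after=abc123>; rel="next"'
print(parse_next_link(header))
```

A client would GET `/v1/tools` (optionally with `?_integrationId=...`), treat a 204 as an empty list, and keep following the URL returned by `parse_next_link` until it yields `None`.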

## Create a tool

> Creates a new tool within an integration.\
> \
> A tool defines reusable processing logic including input transformation,\
> conditional routing through branches, and output mapping with lookups and hooks.<br>

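Per the request schema below, only `name` and `_integrationId` are required; routers, input, and output configuration are optional. A minimal payload might look like the following sketch — the ObjectId, tool name, and router/branch identifiers are placeholders, and the single branch exits through the special `outputRouter` terminal value described in the Router schema.

```python
import json

# Minimal create-tool payload. Only `name` and `_integrationId` are required;
# the router is optional and shown with placeholder values. The branch chains
# to the special "outputRouter" sink, which exits the tool and returns results.
payload = {
    "name": "Order status normalizer",
    "_integrationId": "5f1b2c3d4e5f6a7b8c9d0e1f",  # placeholder ObjectId
    "routers": [
        {
            "id": "router1",
            "routeRecordsTo": "first_matching_branch",  # only supported strategy
            "routeRecordsUsing": "input_filters",
            "branches": [
                {
                    "name": "default",
                    "nextRouterId": "outputRouter",  # terminal sink
                }
            ],
        }
    ],
}

body = json.dumps(payload)  # POST this to /v1/tools with a bearer token
print(body)
```

Field constraints (e.g., `name` up to 200 characters, descriptions up to 10240) are enforced per the schema that follows.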
````json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Request schema for creating or updating a tool.\n\nTools are reusable processing units that encapsulate input transformation,\nconditional routing, and output mapping logic within an integration.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Human-readable name for the tool.\n\nDisplayed in the UI and used to identify the tool's purpose.\n"},"description":{"type":"string","maxLength":10240,"description":"Optional detailed description of what the tool does.\n\nUse this to document the tool's purpose, expected inputs/outputs,\nand any special considerations.\n"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the integration this tool belongs to.\n\nEvery tool must be associated with an integration. The integration\ndetermines the scope and access controls for the tool.\n"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Optional routers for conditional processing logic.\n\nRouters allow you to direct input data to different processing branches\nbased on filter criteria or script logic. 
Tools only support\n\"first_matching_branch\" routing strategy.\n\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal value to exit the tool.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"}},"required":["name","_integrationId"]},"Input":{"type":"object","description":"Configuration for the tool's input processing.\n\nDefines the expected input structure, optional transformations to apply\nbefore routing, and mock data for testing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the input configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the expected input data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the expected input data structure.\n\nUsed for validation, documentation, and AI-assisted tooling.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"transform":{"$ref":"#/components/schemas/Transform"},"mockInput":{"type":"object","description":"Mock data for testing the tool's input processing.\n\nProvides sample input to test transformation logic and routing\nwithout requiring live data. Maximum size: 1MB.\n","additionalProperties":true}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. 
It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all 
mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. 
It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. 
Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (a source array that already has the right shape can instead be passed through unchanged via `extract` alone):\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the output array must be built or transformed\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (a source array that already has the right shape can instead be passed through unchanged via `extract` alone):\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the output array must be built or transformed\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. 
**For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. 
When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract 
fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could 
introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Output":{"type":"object","description":"Configuration for the tool's output processing.\n\nDefines how the tool's results are mapped, transformed, and enriched\nbefore being returned. Supports field mappings, lookups for data\nenrichment, and custom script hooks for pre/post-mapping processing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the output configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the output data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the output data structure.\n\nUsed for documentation and validation of the tool's output.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"mappings":{"type":"array","description":"Field mappings to transform data into the output format.\n\nMaps data from processing results to the output structure.\nUses Celigo's standard mapping format with extract/generate field paths.\n","items":{"$ref":"#/components/schemas/Mappings"}},"lookups":{"type":"array","description":"Lookup tables for data enrichment during output processing.\n\nStatic key-value mappings used to translate values (e.g., status codes,\ncategory names) during output generation.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the lookup, used to reference it from mappings.\n"},"map":{"type":"object","description":"Key-value mapping object. 
Keys are the input values and\nvalues are the corresponding output values.\n","additionalProperties":true},"default":{"type":"string","description":"Default value returned when the input key is not found in the map.\n"},"allowFailures":{"type":"boolean","description":"Whether to continue processing if the lookup fails to find a match\nand no default is provided.\n"}},"required":["name"]}},"hooks":{"type":"object","description":"Custom script hooks for pre- and post-mapping processing.\n\nAllows running custom JavaScript functions before and after\noutput mappings are applied.\n","properties":{"preMap":{"type":"object","description":"Script to run before applying output mappings.\n\nCan modify the data before it is mapped to the output structure.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}},"postMap":{"type":"object","description":"Script to run after applying output mappings.\n\nCan modify the final output data after mappings are applied.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}}}},"mockInput":{"type":"object","description":"Mock data for testing the tool's output processing.\n\nProvides sample data that would arrive from the routing/processing\nstage, used to test mapping and lookup logic. Maximum size: 1MB.\n","additionalProperties":true}}},"Router":{"type":"object","description":"Configuration for conditional routing within a tool.\n\nRouters evaluate input data and direct it to different processing branches\nbased on criteria. 
This enables complex business logic and conditional\nprocessing within the tool.\n\nUnlike flows, tools only support \"first_matching_branch\" routing strategy.\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal sink to exit the tool and return results.\n","properties":{"id":{"type":"string","description":"Unique identifier for this router within the tool.\n\nUsed to reference this router from other routers' branch `nextRouterId`.\n"},"name":{"type":"string","maxLength":300,"description":"Human-readable name for the router.\n"},"routeRecordsTo":{"type":"string","enum":["first_matching_branch"],"description":"Routing strategy. Tools only support \"first_matching_branch\",\nwhich routes to the first branch whose criteria match the input.\n"},"routeRecordsUsing":{"type":"string","enum":["input_filters","script"],"description":"Method used to evaluate routing criteria.\n\n- **input_filters**: Use declarative filter expressions on each branch\n- **script**: Use a custom JavaScript function to determine the branch\n"},"script":{"type":"object","description":"Script configuration when routeRecordsUsing is \"script\".\n\nThe function should return the name of the branch to route to.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name that returns the branch name"}}},"branches":{"type":"array","description":"List of branches defining different processing paths.\n\nEach branch has optional filter criteria and a set of processing steps.\nRecords are evaluated against branch criteria in order; the first\nmatching branch is selected.\n","items":{"type":"object","properties":{"name":{"type":"string","maxLength":300,"description":"Name of this branch.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of when and why this branch is selected.\n"},"inputFilter":{"type":"object","description":"Filter 
criteria to determine if this branch should be selected.\n\nUses Celigo's expression-based filter format.\n","properties":{"version":{"type":"string","enum":["1"],"description":"Filter version"},"rules":{"type":"array","description":"Filter rules in Celigo expression-based filter format.\n\nArray-based DSL where the first element is an operator (e.g., \"equals\", \"and\", \"or\"),\nfollowed by operands which can be nested expressions.\n","items":{}}}},"nextRouterId":{"type":"string","description":"Identifier of the next router to chain to after this branch completes.\n\nUse \"outputRouter\" as a special terminal value to exit the tool\nand return the processing results.\n"},"pageProcessors":{"type":"array","description":"Processing steps to execute in this branch.\n\nEach processor references an export (lookup) or import resource\nfor data retrieval or submission.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["export","import"],"description":"Type of processor.\n\n- **export**: Retrieves data from an external system (lookup)\n- **import**: Sends data to an external system\n"},"_exportId":{"type":"string","format":"objectId","description":"Export resource reference (when type is \"export\")"},"_importId":{"type":"string","format":"objectId","description":"Import resource reference (when type is \"import\")"},"proceedOnFailure":{"type":"boolean","description":"Whether to continue processing subsequent steps if this\nprocessor fails.\n"},"responseMapping":{"type":"object","description":"Mapping configuration for the processor's response data.\n","properties":{"fields":{"type":"array","description":"Simple field-level mappings","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path from the response"},"generate":{"type":"string","description":"Target field path in the record"}}}},"lists":{"type":"array","description":"List-level mappings for array 
data","items":{"type":"object","properties":{"generate":{"type":"string","description":"Target list path"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path"},"generate":{"type":"string","description":"Target field path"}}}}}}}}},"hooks":{"type":"object","description":"Custom scripts for processing","properties":{"postResponseMap":{"type":"object","description":"Script to run after response mapping","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute"}}}}}}}}}}}},"required":["id","branches"]},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"Response":{"type":"object","description":"Response schema for tool operations.\n\nContains the complete tool configuration including metadata, input/output\nsettings, routing logic, and AI-generated descriptions.\n","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the tool"},"name":{"type":"string","description":"Human-readable name for the tool"},"description":{"type":"string","description":"Detailed description of the tool's purpose"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the parent integration"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Routing configuration for conditional processing.\n\nOnly present when the tool has routers configured.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was created"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was last modified"},"deletedAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was soft-deleted.\n\nOnly present for deleted tools. 
The tool will be permanently\nremoved 30 days after this timestamp.\n"}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. 
Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/tools":{"post":{"summary":"Create a tool","description":"Creates a new tool within an integration.\n\nA tool defines reusable processing logic including input transformation,\nconditional routing through branches, and output mapping with lookups and hooks.\n","operationId":"createTool","tags":["Tools"],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"201":{"description":"Tool created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
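The composite-object mechanism documented in the mappings schema above (arrays in the extract path replaced by the element on the current iteration path, so parent context stays addressable) can be illustrated with a short sketch. This is not Celigo's engine — just a minimal Python model of the documented behavior, using the same customer/orders/items example:

```python
def parse_path(path):
    """'$.customer.orders[*].items[*]' ->
    [('customer', False), ('orders', True), ('items', True)]"""
    segments = []
    for part in path.lstrip("$.").split("."):
        if part.endswith("[*]"):
            segments.append((part[:-3], True))   # array segment
        else:
            segments.append((part, False))       # plain object segment
    return segments

def composites(node, segments):
    """For each element matched by the array segments, yield a copy of
    `node` in which every traversed array is replaced by the single
    element on the current iteration path."""
    if not segments:
        yield node
        return
    key, is_array = segments[0]
    children = node[key] if is_array else [node[key]]
    for child in children:
        for rebuilt in composites(child, segments[1:]):
            out = dict(node)      # shallow copy; untouched keys are shared
            out[key] = rebuilt    # array (or object) replaced in the copy
            yield out

record = {
    "customer": {
        "name": "John Doe",
        "orders": [
            {"id": "O-001", "items": [{"sku": "ABC", "qty": 2},
                                      {"sku": "XYZ", "qty": 1}]},
            {"id": "O-002", "items": [{"sku": "DEF", "qty": 3}]},
        ],
    }
}

results = list(composites(record, parse_path("$.customer.orders[*].items[*]")))
```

Each of the three items yields one composite object in which `$.customer.orders.items.sku` (the current item), `$.customer.orders.id` (the parent order), and `$.customer.name` (top-level context) are all reachable without array indices, matching the worked example in the schema description.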

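The static-lookup semantics documented above (match → mapped value; miss → `default` when `allowFailures` is true; miss → record failure otherwise) can be sketched as follows. The `apply_lookup` helper is illustrative only, not part of the API:

```python
def apply_lookup(value, lookup):
    """Sketch of static-lookup behavior per the schema docs: a matching
    key returns the mapped value; on a miss, allowFailures=True falls
    back to `default`, while allowFailures=False fails the record
    (modeled here as a raised KeyError)."""
    table = lookup.get("map", {})
    if value in table:
        return table[value]
    if lookup.get("allowFailures"):
        return lookup.get("default")
    raise KeyError(f"no mapping for {value!r} in lookup {lookup['name']!r}")

country = {
    "name": "countryCodeToName",
    "map": {"US": "United States", "CA": "Canada", "UK": "United Kingdom"},
    "default": "Unknown Country",
    "allowFailures": True,
}

apply_lookup("US", country)  # "United States"
apply_lookup("FR", country)  # falls back to "Unknown Country"
```

This mirrors the recommended pattern in the `allowFailures` description: enable failures-as-defaults with a meaningful default so flows stay operational while unmapped values remain visible.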
## Get a tool

> Returns the complete configuration of a specific tool.<br>
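Tools returned by this endpoint may include `routers`. The branch-selection logic described in the Router schema above — tools only support `first_matching_branch`, with branches evaluated in order against an expression-based `inputFilter` — can be sketched as below. Note the operand convention (`["extract", "<field>"]` to read a record field, literals otherwise) is an illustrative assumption; the real operand grammar is not shown in this excerpt:

```python
def resolve(operand, record):
    # Assumed operand convention for illustration: a nested
    # ["extract", "<field>"] reads a record field; anything else is a literal.
    if isinstance(operand, list) and operand and operand[0] == "extract":
        return record.get(operand[1])
    return operand

def evaluate(rule, record):
    """Toy evaluator for the array-based filter DSL: the first element is
    the operator, the rest are operands (possibly nested expressions)."""
    op, *args = rule
    if op == "and":
        return all(evaluate(a, record) for a in args)
    if op == "or":
        return any(evaluate(a, record) for a in args)
    if op == "equals":
        return resolve(args[0], record) == resolve(args[1], record)
    raise ValueError(f"unsupported operator: {op}")

def pick_branch(record, router):
    """'first_matching_branch' strategy: branches are checked in order;
    a branch with no inputFilter always matches."""
    for branch in router["branches"]:
        rules = branch.get("inputFilter", {}).get("rules")
        if not rules or evaluate(rules, record):
            return branch["name"]
    return None

router = {
    "id": "r1",
    "routeRecordsTo": "first_matching_branch",
    "branches": [
        {"name": "orders",
         "inputFilter": {"version": "1",
                         "rules": ["equals", ["extract", "type"], "order"]}},
        {"name": "everything-else"},  # no filter: catches the rest
    ],
}
```

A branch would then either chain onward via `nextRouterId` or terminate at the special `"outputRouter"` sink to return results.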

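A minimal client-side sketch for calling this endpoint and handling errors. The by-id path (`GET /v1/tools/<toolId>`) follows REST convention and is an assumption, since the concrete path is not shown in this excerpt; the token and tool ID are placeholders. The error handling follows the documented response shapes: 401 bodies are a bare `{"message": ...}` from the auth middleware, while other error statuses use the standard `{"errors": [...]}` envelope:

```python
import json
import urllib.request

BASE = "https://api.integrator.io"       # or api.eu.integrator.io for EU accounts
TOOL_ID = "0123456789abcdef01234567"     # hypothetical 24-char ObjectId

# Assumed REST-conventional by-id path; send the bearer token on every call.
req = urllib.request.Request(
    f"{BASE}/v1/tools/{TOOL_ID}",
    headers={"Authorization": "Bearer <your-api-token>"},
)

def error_messages(status, body):
    """Extract human-readable errors per the response docs: key off the
    HTTP status, since 401 does NOT use the {errors: [...]} envelope."""
    doc = json.loads(body)
    if status == 401:
        return [doc["message"]]
    return [e["message"] for e in doc["errors"]]
```

As the 401 response documentation notes, callers should never try to destructure `errors[]` out of a 401 body — only the `message` string is present there.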
````json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Response":{"type":"object","description":"Response schema for tool operations.\n\nContains the complete tool configuration including metadata, input/output\nsettings, routing logic, and AI-generated descriptions.\n","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the tool"},"name":{"type":"string","description":"Human-readable name for the tool"},"description":{"type":"string","description":"Detailed description of the tool's purpose"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the parent integration"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Routing configuration for conditional processing.\n\nOnly present when the tool has routers configured.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was created"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was last modified"},"deletedAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was soft-deleted.\n\nOnly present for deleted tools. 
The tool will be permanently\nremoved 30 days after this timestamp.\n"}}},"Input":{"type":"object","description":"Configuration for the tool's input processing.\n\nDefines the expected input structure, optional transformations to apply\nbefore routing, and mock data for testing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the input configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the expected input data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the expected input data structure.\n\nUsed for validation, documentation, and AI-assisted tooling.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"transform":{"$ref":"#/components/schemas/Transform"},"mockInput":{"type":"object","description":"Mock data for testing the tool's input processing.\n\nProvides sample input to test transformation logic and routing\nwithout requiring live data. Maximum size: 1MB.\n","additionalProperties":true}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. 
It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all 
mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. 
It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. 
Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types:\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the array must be built or transformed; `extract` alone suffices only when the source is already an array of the right shape\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. 
**For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. 
When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract 
fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could 
introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Output":{"type":"object","description":"Configuration for the tool's output processing.\n\nDefines how the tool's results are mapped, transformed, and enriched\nbefore being returned. Supports field mappings, lookups for data\nenrichment, and custom script hooks for pre/post-mapping processing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the output configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the output data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the output data structure.\n\nUsed for documentation and validation of the tool's output.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"mappings":{"type":"array","description":"Field mappings to transform data into the output format.\n\nMaps data from processing results to the output structure.\nUses Celigo's standard mapping format with extract/generate field paths.\n","items":{"$ref":"#/components/schemas/Mappings"}},"lookups":{"type":"array","description":"Lookup tables for data enrichment during output processing.\n\nStatic key-value mappings used to translate values (e.g., status codes,\ncategory names) during output generation.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the lookup, used to reference it from mappings.\n"},"map":{"type":"object","description":"Key-value mapping object. 
Keys are the input values and\nvalues are the corresponding output values.\n","additionalProperties":true},"default":{"type":"string","description":"Default value returned when the input key is not found in the map.\n"},"allowFailures":{"type":"boolean","description":"Whether to continue processing if the lookup fails to find a match\nand no default is provided.\n"}},"required":["name"]}},"hooks":{"type":"object","description":"Custom script hooks for pre- and post-mapping processing.\n\nAllows running custom JavaScript functions before and after\noutput mappings are applied.\n","properties":{"preMap":{"type":"object","description":"Script to run before applying output mappings.\n\nCan modify the data before it is mapped to the output structure.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}},"postMap":{"type":"object","description":"Script to run after applying output mappings.\n\nCan modify the final output data after mappings are applied.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}}}},"mockInput":{"type":"object","description":"Mock data for testing the tool's output processing.\n\nProvides sample data that would arrive from the routing/processing\nstage, used to test mapping and lookup logic. Maximum size: 1MB.\n","additionalProperties":true}}},"Router":{"type":"object","description":"Configuration for conditional routing within a tool.\n\nRouters evaluate input data and direct it to different processing branches\nbased on criteria. 
This enables complex business logic and conditional\nprocessing within the tool.\n\nUnlike flows, tools only support \"first_matching_branch\" routing strategy.\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal sink to exit the tool and return results.\n","properties":{"id":{"type":"string","description":"Unique identifier for this router within the tool.\n\nUsed to reference this router from other routers' branch `nextRouterId`.\n"},"name":{"type":"string","maxLength":300,"description":"Human-readable name for the router.\n"},"routeRecordsTo":{"type":"string","enum":["first_matching_branch"],"description":"Routing strategy. Tools only support \"first_matching_branch\",\nwhich routes to the first branch whose criteria match the input.\n"},"routeRecordsUsing":{"type":"string","enum":["input_filters","script"],"description":"Method used to evaluate routing criteria.\n\n- **input_filters**: Use declarative filter expressions on each branch\n- **script**: Use a custom JavaScript function to determine the branch\n"},"script":{"type":"object","description":"Script configuration when routeRecordsUsing is \"script\".\n\nThe function should return the name of the branch to route to.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name that returns the branch name"}}},"branches":{"type":"array","description":"List of branches defining different processing paths.\n\nEach branch has optional filter criteria and a set of processing steps.\nRecords are evaluated against branch criteria in order; the first\nmatching branch is selected.\n","items":{"type":"object","properties":{"name":{"type":"string","maxLength":300,"description":"Name of this branch.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of when and why this branch is selected.\n"},"inputFilter":{"type":"object","description":"Filter 
criteria to determine if this branch should be selected.\n\nUses Celigo's expression-based filter format.\n","properties":{"version":{"type":"string","enum":["1"],"description":"Filter version"},"rules":{"type":"array","description":"Filter rules in Celigo expression-based filter format.\n\nArray-based DSL where the first element is an operator (e.g., \"equals\", \"and\", \"or\"),\nfollowed by operands which can be nested expressions.\n","items":{}}}},"nextRouterId":{"type":"string","description":"Identifier of the next router to chain to after this branch completes.\n\nUse \"outputRouter\" as a special terminal value to exit the tool\nand return the processing results.\n"},"pageProcessors":{"type":"array","description":"Processing steps to execute in this branch.\n\nEach processor references an export (lookup) or import resource\nfor data retrieval or submission.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["export","import"],"description":"Type of processor.\n\n- **export**: Retrieves data from an external system (lookup)\n- **import**: Sends data to an external system\n"},"_exportId":{"type":"string","format":"objectId","description":"Export resource reference (when type is \"export\")"},"_importId":{"type":"string","format":"objectId","description":"Import resource reference (when type is \"import\")"},"proceedOnFailure":{"type":"boolean","description":"Whether to continue processing subsequent steps if this\nprocessor fails.\n"},"responseMapping":{"type":"object","description":"Mapping configuration for the processor's response data.\n","properties":{"fields":{"type":"array","description":"Simple field-level mappings","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path from the response"},"generate":{"type":"string","description":"Target field path in the record"}}}},"lists":{"type":"array","description":"List-level mappings for array 
data","items":{"type":"object","properties":{"generate":{"type":"string","description":"Target list path"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path"},"generate":{"type":"string","description":"Target field path"}}}}}}}}},"hooks":{"type":"object","description":"Custom scripts for processing","properties":{"postResponseMap":{"type":"object","description":"Script to run after response mapping","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute"}}}}}}}}}}}},"required":["id","branches"]},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. 
The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/tools/{_id}":{"get":{"summary":"Get a tool","description":"Returns the complete configuration of a specific tool.\n","operationId":"getToolById","tags":["Tools"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the tool","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Tool retrieved successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
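Because the 401 body is a bare `{"message": ...}` object while other error statuses use the `{"errors": [...]}` envelope, client code needs two code paths. A minimal Python sketch of that branching (the helper name `summarize_error` is illustrative, not part of the API):

```python
def summarize_error(status: int, body: dict) -> list[str]:
    """Collect human-readable messages from an integrator.io error response.

    Hypothetical helper: the 401 branch keys off the bare {"message": ...}
    body produced by the auth middleware; other statuses are assumed to
    carry the standard {"errors": [...]} envelope described above.
    """
    if status == 401:
        # 401 bodies are NOT wrapped in an errors[] array.
        return [body.get("message", "Unauthorized")]
    # Standard envelope: "message" is required on every item;
    # "code" may be a string or an integer, "field"/"source" are optional.
    messages = []
    for err in body.get("errors", []):
        prefix = f"{err['code']}: " if "code" in err else ""
        suffix = f" (field: {err['field']})" if "field" in err else ""
        messages.append(f"{prefix}{err['message']}{suffix}")
    return messages
```

Keying the 401 path off the HTTP status (rather than probing the body shape) follows the guidance in the spec above: the auth middleware short-circuits before the handler, so a 401 never carries an `errors[]` array.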

## Update a tool

> Updates an existing tool with the provided configuration.\
> Use this endpoint for major changes to a tool's structure or behavior.<br>

````json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Request schema for creating or updating a tool.\n\nTools are reusable processing units that encapsulate input transformation,\nconditional routing, and output mapping logic within an integration.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Human-readable name for the tool.\n\nDisplayed in the UI and used to identify the tool's purpose.\n"},"description":{"type":"string","maxLength":10240,"description":"Optional detailed description of what the tool does.\n\nUse this to document the tool's purpose, expected inputs/outputs,\nand any special considerations.\n"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the integration this tool belongs to.\n\nEvery tool must be associated with an integration. The integration\ndetermines the scope and access controls for the tool.\n"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Optional routers for conditional processing logic.\n\nRouters allow you to direct input data to different processing branches\nbased on filter criteria or script logic. 
Tools only support\n\"first_matching_branch\" routing strategy.\n\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal value to exit the tool.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"}},"required":["name","_integrationId"]},"Input":{"type":"object","description":"Configuration for the tool's input processing.\n\nDefines the expected input structure, optional transformations to apply\nbefore routing, and mock data for testing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the input configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the expected input data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the expected input data structure.\n\nUsed for validation, documentation, and AI-assisted tooling.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"transform":{"$ref":"#/components/schemas/Transform"},"mockInput":{"type":"object","description":"Mock data for testing the tool's input processing.\n\nProvides sample input to test transformation logic and routing\nwithout requiring live data. Maximum size: 1MB.\n","additionalProperties":true}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. 
It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all 
mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. 
It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. 
Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the primary mechanism for constructing array data types (a source array that already has the right shape can instead be passed through unchanged via `extract` alone):\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the array must be built or transformed\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. 
**For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. 
When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectId"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract 
fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could 
introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Output":{"type":"object","description":"Configuration for the tool's output processing.\n\nDefines how the tool's results are mapped, transformed, and enriched\nbefore being returned. Supports field mappings, lookups for data\nenrichment, and custom script hooks for pre/post-mapping processing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the output configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the output data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the output data structure.\n\nUsed for documentation and validation of the tool's output.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"mappings":{"type":"array","description":"Field mappings to transform data into the output format.\n\nMaps data from processing results to the output structure.\nUses Celigo's standard mapping format with extract/generate field paths.\n","items":{"$ref":"#/components/schemas/Mappings"}},"lookups":{"type":"array","description":"Lookup tables for data enrichment during output processing.\n\nStatic key-value mappings used to translate values (e.g., status codes,\ncategory names) during output generation.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the lookup, used to reference it from mappings.\n"},"map":{"type":"object","description":"Key-value mapping object. 
Keys are the input values and\nvalues are the corresponding output values.\n","additionalProperties":true},"default":{"type":"string","description":"Default value returned when the input key is not found in the map.\n"},"allowFailures":{"type":"boolean","description":"Whether to continue processing if the lookup fails to find a match\nand no default is provided.\n"}},"required":["name"]}},"hooks":{"type":"object","description":"Custom script hooks for pre- and post-mapping processing.\n\nAllows running custom JavaScript functions before and after\noutput mappings are applied.\n","properties":{"preMap":{"type":"object","description":"Script to run before applying output mappings.\n\nCan modify the data before it is mapped to the output structure.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}},"postMap":{"type":"object","description":"Script to run after applying output mappings.\n\nCan modify the final output data after mappings are applied.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}}}},"mockInput":{"type":"object","description":"Mock data for testing the tool's output processing.\n\nProvides sample data that would arrive from the routing/processing\nstage, used to test mapping and lookup logic. Maximum size: 1MB.\n","additionalProperties":true}}},"Router":{"type":"object","description":"Configuration for conditional routing within a tool.\n\nRouters evaluate input data and direct it to different processing branches\nbased on criteria. 
This enables complex business logic and conditional\nprocessing within the tool.\n\nUnlike flows, tools only support \"first_matching_branch\" routing strategy.\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal sink to exit the tool and return results.\n","properties":{"id":{"type":"string","description":"Unique identifier for this router within the tool.\n\nUsed to reference this router from other routers' branch `nextRouterId`.\n"},"name":{"type":"string","maxLength":300,"description":"Human-readable name for the router.\n"},"routeRecordsTo":{"type":"string","enum":["first_matching_branch"],"description":"Routing strategy. Tools only support \"first_matching_branch\",\nwhich routes to the first branch whose criteria match the input.\n"},"routeRecordsUsing":{"type":"string","enum":["input_filters","script"],"description":"Method used to evaluate routing criteria.\n\n- **input_filters**: Use declarative filter expressions on each branch\n- **script**: Use a custom JavaScript function to determine the branch\n"},"script":{"type":"object","description":"Script configuration when routeRecordsUsing is \"script\".\n\nThe function should return the name of the branch to route to.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name that returns the branch name"}}},"branches":{"type":"array","description":"List of branches defining different processing paths.\n\nEach branch has optional filter criteria and a set of processing steps.\nRecords are evaluated against branch criteria in order; the first\nmatching branch is selected.\n","items":{"type":"object","properties":{"name":{"type":"string","maxLength":300,"description":"Name of this branch.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of when and why this branch is selected.\n"},"inputFilter":{"type":"object","description":"Filter 
criteria to determine if this branch should be selected.\n\nUses Celigo's expression-based filter format.\n","properties":{"version":{"type":"string","enum":["1"],"description":"Filter version"},"rules":{"type":"array","description":"Filter rules in Celigo expression-based filter format.\n\nArray-based DSL where the first element is an operator (e.g., \"equals\", \"and\", \"or\"),\nfollowed by operands which can be nested expressions.\n","items":{}}}},"nextRouterId":{"type":"string","description":"Identifier of the next router to chain to after this branch completes.\n\nUse \"outputRouter\" as a special terminal value to exit the tool\nand return the processing results.\n"},"pageProcessors":{"type":"array","description":"Processing steps to execute in this branch.\n\nEach processor references an export (lookup) or import resource\nfor data retrieval or submission.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["export","import"],"description":"Type of processor.\n\n- **export**: Retrieves data from an external system (lookup)\n- **import**: Sends data to an external system\n"},"_exportId":{"type":"string","format":"objectId","description":"Export resource reference (when type is \"export\")"},"_importId":{"type":"string","format":"objectId","description":"Import resource reference (when type is \"import\")"},"proceedOnFailure":{"type":"boolean","description":"Whether to continue processing subsequent steps if this\nprocessor fails.\n"},"responseMapping":{"type":"object","description":"Mapping configuration for the processor's response data.\n","properties":{"fields":{"type":"array","description":"Simple field-level mappings","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path from the response"},"generate":{"type":"string","description":"Target field path in the record"}}}},"lists":{"type":"array","description":"List-level mappings for array 
data","items":{"type":"object","properties":{"generate":{"type":"string","description":"Target list path"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path"},"generate":{"type":"string","description":"Target field path"}}}}}}}}},"hooks":{"type":"object","description":"Custom scripts for processing","properties":{"postResponseMap":{"type":"object","description":"Script to run after response mapping","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute"}}}}}}}}}}}},"required":["id","branches"]},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"Response":{"type":"object","description":"Response schema for tool operations.\n\nContains the complete tool configuration including metadata, input/output\nsettings, routing logic, and AI-generated descriptions.\n","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the tool"},"name":{"type":"string","description":"Human-readable name for the tool"},"description":{"type":"string","description":"Detailed description of the tool's purpose"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the parent integration"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Routing configuration for conditional processing.\n\nOnly present when the tool has routers configured.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was created"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was last modified"},"deletedAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was soft-deleted.\n\nOnly present for deleted tools. 
The tool will be permanently\nremoved 30 days after this timestamp.\n"}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. 
Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/tools/{_id}":{"put":{"summary":"Update a tool","description":"Updates an existing tool with the provided configuration.\nThis is used for major updates to a tool's structure or behavior.\n","operationId":"updateTool","tags":["Tools"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the tool","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"200":{"description":"Tool updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
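To make the Router schema above concrete, here is a minimal sketch of a `routers` entry as a Python dict. The keys and enum values (`first_matching_branch`, `input_filters`, the `outputRouter` terminal sink, the required `id`/`branches`) come from the schema; the ObjectId and the exact filter-rule operands are illustrative placeholders, not values from the API.

```python
# Minimal routers[] entry, per the Router schema. The _importId is a
# placeholder ObjectId; the inputFilter rule is an illustrative shape
# of the array-based DSL (operator first, then operands).
router = {
    "id": "router1",                              # referenced by other routers' nextRouterId
    "routeRecordsTo": "first_matching_branch",    # the only strategy tools support
    "routeRecordsUsing": "input_filters",         # declarative filters, not a script
    "branches": [
        {
            "name": "Open records",
            "inputFilter": {
                "version": "1",
                "rules": ["equals", ["extract", "status"], "open"],  # illustrative
            },
            "pageProcessors": [
                {"type": "import", "_importId": "000000000000000000000000"}
            ],
            "nextRouterId": "outputRouter",       # terminal sink: exit the tool
        }
    ],
}
```

Records that match no branch of any router never reach a `pageProcessors` list; chaining continues only through `nextRouterId`.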

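Because 401s are produced by the auth middleware and carry a bare `{message}` body rather than the standard `{errors: [...]}` envelope, clients need two parsing paths. A minimal sketch (the function name is ours, not part of the API):

```python
def parse_api_error(status: int, body: dict) -> list[str]:
    """Normalize an integrator.io error body into a list of messages.

    401 responses are a bare {"message": str} with no `errors` array;
    every other error status uses the {"errors": [...]} envelope, where
    each item has at least `message` and `code` may be a string
    (e.g. "invalid_ref") or an integer mirroring an upstream HTTP status.
    """
    if status == 401:
        return [body.get("message", "Unauthorized")]
    return [e["message"] for e in body.get("errors", [])]
```

Keying off the HTTP status first, as here, avoids destructuring an `errors[]` that a 401 body never carries.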
## Delete a tool

> Deletes a tool. The tool is soft-deleted and retained for 30 days\
> before permanent removal.<br>

```json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. 
`500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/tools/{_id}":{"delete":{"summary":"Delete a tool","description":"Deletes a tool. The tool is soft-deleted and retained for 30 days\nbefore permanent removal.\n","operationId":"deleteTool","tags":["Tools"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the tool","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"204":{"description":"Tool deleted successfully"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
```
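The 30-day retention window can be computed from the `deletedAt` timestamp a deleted tool carries. A small sketch, assuming the documented ISO 8601 format with UTC `Z` suffix:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # soft-deleted tools are purged 30 days after deletedAt

def purge_date(deleted_at: str) -> str:
    """Return the timestamp after which a soft-deleted tool is
    permanently removed, given its `deletedAt` value."""
    deleted = datetime.fromisoformat(deleted_at.replace("Z", "+00:00"))
    purged = deleted + timedelta(days=RETENTION_DAYS)
    return purged.isoformat().replace("+00:00", "Z")
```

Within this window the tool still appears (with `deletedAt` set) in responses that include deleted resources; after it, the `_id` returns 404.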

## List connections a tool depends on

> Returns the full Connection resources the tool references — both directly\
> (via the tool's own \`\_connectionId\` fields on its steps) and transitively\
> through descendant resources (inner tools, lookups, imports, exports).\
> \
> Returns \`200\` with \`\[]\` when the tool has no connection dependencies.\
> \
> AI guidance:\
> \- Use this to discover \*what systems\* a tool talks to before cloning,\
> &#x20; moving, or evaluating the blast radius of a connection change.\
> \- For the full dependency tree (imports, exports, nested tools), use\
> &#x20; \`GET /v1/tools/{\_id}/descendants\` instead — that endpoint returns the\
> &#x20; actual resource docs, grouped by type.
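
As a sketch of the blast-radius check described above: the endpoint returns full Connection resources (or `[]`), so the documented `type`, `name`, and `offline` fields can be summarized before cloning or moving a tool. The function name is illustrative, not part of the API.

```python
def summarize_connection_deps(connections: list[dict]) -> dict:
    """Group a tool's connection dependencies by connection `type` and
    flag any that are currently offline.

    `connections` is the JSON array returned by the endpoint: full
    Connection resources, or [] when the tool has no dependencies.
    """
    by_type: dict[str, list[str]] = {}
    offline: list[str] = []
    for conn in connections:
        by_type.setdefault(conn["type"], []).append(conn["name"])
        if conn.get("offline"):  # read-only flag on the Connection resource
            offline.append(conn["name"])
    return {"by_type": by_type, "offline": offline}
```

An offline or heavily shared connection surfacing here is a signal to check `GET /v1/tools/{_id}/descendants` for the resources that actually use it.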

```json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Response-2":{"type":"object","description":"Complete connection object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request-2"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"salesforce":{"allOf":[{"$ref":"#/components/schemas/Salesforce"},{"type":"object","properties":{"info":{"type":"object","description":"Additional Salesforce connection information","readOnly":true}}}]},"netsuite":{"allOf":[{"$ref":"#/components/schemas/NetSuite"},{"type":"object","properties":{"suiteAppInstalled":{"type":"boolean","description":"Whether the Celigo SuiteApp is installed","readOnly":true}}}]},"_integrationId":{"type":"string","format":"objectId","description":"ID of the integration this connection belongs to","readOnly":true},"_connectorId":{"type":"string","format":"objectId","description":"ID of the connector template this connection is based on","readOnly":true},"_id":{"type":"string","format":"objectId","description":"Unique identifier for the connection","readOnly":true},"offline":{"type":"boolean","description":"Whether the connection is currently offline/disabled","readOnly":true},"_userId":{"type":"string","format":"objectId","description":"ID of the user who owns this connection","readOnly":true},"createdAt":{"type":"string","format":"date-time","description":"Timestamp when the connection was created","readOnly":true},"lastModified":{"type":"string","format":"date-time","description":"Timestamp when the connection was last 
modified","readOnly":true},"deletedAt":{"type":"string","format":"date-time","description":"Timestamp when the connection was soft-deleted (if applicable)","readOnly":true},"needsAuthorization":{"type":"boolean","description":"Whether this connection requires OAuth2 or similar authorization flow","readOnly":true},"isAmazonSPConnection":{"type":"boolean","description":"Whether this is an Amazon Selling Partner API connection","readOnly":true},"concurrencyInfo":{"type":"object","description":"Current concurrency usage and limits for this connection","readOnly":true,"properties":{"currentLevel":{"type":"number","description":"Current concurrency level being used"},"targetLevel":{"type":"number","description":"Target concurrency level"},"maxLevel":{"type":"number","description":"Maximum allowed concurrency level"},"borrowedFrom":{"type":"string","description":"ID of connection this is borrowing concurrency from"}}},"rateLimit":{"type":"object","description":"Rate limiting information for this connection","readOnly":true,"properties":{"isRecovering":{"type":"boolean","description":"Whether the connection is currently recovering from rate limits"},"lastErrorAt":{"type":"string","format":"date-time","description":"Timestamp of last rate limit error"},"retryCount":{"type":"number","description":"Number of retry attempts for rate limit recovery"}}},"status":{"type":"string","enum":["active","inactive","error","pending","offline"],"description":"Current operational status of the connection","readOnly":true},"lastPingAt":{"type":"string","format":"date-time","description":"Timestamp of last successful ping test","readOnly":true},"lastPingStatus":{"type":"string","enum":["success","failed","pending"],"description":"Result of the last ping test","readOnly":true},"capabilities":{"type":"object","description":"Discovered or configured capabilities of this connection","readOnly":true,"properties":{"read":{"type":"boolean","description":"Supports read 
operations"},"write":{"type":"boolean","description":"Supports write operations"},"delete":{"type":"boolean","description":"Supports delete operations"},"realtime":{"type":"boolean","description":"Supports real-time data streaming"},"batch":{"type":"boolean","description":"Supports batch operations"}}},"quotas":{"type":"object","description":"Usage quotas and limits for this connection","readOnly":true,"properties":{"daily":{"type":"object","properties":{"limit":{"type":"number"},"used":{"type":"number"},"remaining":{"type":"number"}}},"monthly":{"type":"object","properties":{"limit":{"type":"number"},"used":{"type":"number"},"remaining":{"type":"number"}}}}},"_sourceId":{"type":"string","format":"objectId","description":"ID of the source this connection was created from","readOnly":true},"_templateId":{"type":"string","format":"objectId","description":"ID of the template this connection was generated from","readOnly":true},"draft":{"type":"boolean","description":"Whether this connection is in draft state","readOnly":true},"draftExpiresAt":{"type":"string","format":"date-time","description":"Timestamp when the draft connection expires","readOnly":true},"debugUntil":{"type":"string","format":"date-time","description":"Timestamp until which debug logging is enabled","readOnly":true},"apiIdentifier":{"type":"string","description":"API identifier for this connection","readOnly":true}}}]},"Request-2":{"type":"object","description":"Fields that can be sent when creating or updating a connection","properties":{"name":{"type":"string","description":"Descriptive identifier for the connection resource in human-readable format.\n\nThis string serves as the primary display name for the connection across the application UI and is used in:\n- API responses when listing connections\n- Error and audit logs for traceability\n- Flow builder UI components\n- Integration configuration dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names 
is strongly recommended\nfor clarity when managing multiple connections. The name should indicate the target system and purpose.\n\nMaximum length: 200 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n","maxLength":200},"type":{"type":"string","description":"The type of connection determining which authentication and connectivity options are available","enum":["netsuite","salesforce","ftp","s3","rest","wrapper","http","rdbms","mongodb","as2","filesystem","mcp","dynamodb","jdbc","van"]},"externalId":{"type":"string","description":"External identifier for the connection, often used for integration with third-party systems"},"assistant":{"type":"string","description":"Application name in lowercase for HTTP connections to systems with integrator.io adaptors.\nUsed to identify the target application being connected to.\nExamples - Shopify: \"shopify\", eBay: \"ebay\".\nOnly applicable for HTTP connection types.\n"},"_agentId":{"type":"string","format":"objectId","description":"Reference to a Celigo on-premise Agent. 
Required for connection types that need\nlocal network or filesystem access (JDBC, filesystem, Oracle RDBMS).\nThe agent establishes a secure tunnel between the on-premise environment and integrator.io.\n"},"_borrowConcurrencyFromConnectionId":{"type":"string","format":"objectId","description":"Reference to another connection to share concurrency limits with.\nWhen set, this connection's concurrency is counted against the referenced\nconnection's limit instead of maintaining its own.\n"},"debugDate":{"type":"string","format":"date-time","description":"Date until which debug logging is enabled for this connection"},"settingsForm":{"type":"object","description":"Dynamic form configuration for connection-specific settings"},"settings":{"type":"object","description":"Connection-specific settings and configurations"},"pgp":{"type":"object","description":"PGP encryption settings for file-based connections"},"ssl":{"$ref":"#/components/schemas/SSL"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"ftp":{"$ref":"#/components/schemas/FTP"},"s3":{"$ref":"#/components/schemas/S3"},"rest":{"$ref":"#/components/schemas/REST"},"http":{"$ref":"#/components/schemas/HTTP"},"rdbms":{"$ref":"#/components/schemas/RDBMS"},"mongodb":{"$ref":"#/components/schemas/MongoDB"},"as2":{"$ref":"#/components/schemas/AS2"},"filesystem":{"$ref":"#/components/schemas/Filesystem"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB"},"jdbc":{"$ref":"#/components/schemas/JDBC"},"van":{"$ref":"#/components/schemas/VAN"},"mcp":{"$ref":"#/components/schemas/MCP"},"wrapper":{"$ref":"#/components/schemas/Wrapper"}},"required":["name","type"],"anyOf":[{"title":"NetSuite","required":["netsuite"],"properties":{"netsuite":{"$ref":"#/components/schemas/NetSuite"}}},{"title":"Salesforce","required":["salesforce"],"properties":{"salesforce":{"$ref":"#/components/schemas/Salesforce"}}},{"title":"FTP","required":["ftp"],"properties":{"ftp":{"$ref":"#/compone
nts/schemas/FTP"}}},{"title":"S3","required":["s3"],"properties":{"s3":{"$ref":"#/components/schemas/S3"}}},{"title":"REST","required":["rest"],"properties":{"rest":{"$ref":"#/components/schemas/REST"}}},{"title":"HTTP","required":["http"],"properties":{"http":{"$ref":"#/components/schemas/HTTP"}}},{"title":"RDBMS","required":["rdbms"],"properties":{"rdbms":{"$ref":"#/components/schemas/RDBMS"}}},{"title":"MongoDB","required":["mongodb"],"properties":{"mongodb":{"$ref":"#/components/schemas/MongoDB"}}},{"title":"AS2","required":["as2"],"properties":{"as2":{"$ref":"#/components/schemas/AS2"}}},{"title":"Filesystem","required":["filesystem"],"properties":{"filesystem":{"$ref":"#/components/schemas/Filesystem"}}},{"title":"MCP","required":["mcp"],"properties":{"mcp":{"$ref":"#/components/schemas/MCP"}}},{"title":"DynamoDB","required":["dynamodb"],"properties":{"dynamodb":{"$ref":"#/components/schemas/DynamoDB"}}},{"title":"JDBC","required":["jdbc"],"properties":{"jdbc":{"$ref":"#/components/schemas/JDBC"}}},{"title":"VAN","required":["van"],"properties":{"van":{"$ref":"#/components/schemas/VAN"}}},{"title":"Wrapper","required":["wrapper"],"properties":{"wrapper":{"$ref":"#/components/schemas/Wrapper"}}}]},"SSL":{"type":"object","description":"SSL/TLS certificate configuration for secure database connections.\n\nUsed by RDBMS and other connection types that support client certificate authentication\nor connections to servers with private CA-signed certificates.\n\n**Usage rules**\n- Provide cert + key together for client certificate (mutual TLS) authentication.\n- Provide ca alone when the server uses a certificate signed by a private CA.\n- cert/key and ca can be combined for mTLS with a private CA.\n- passphrase is only needed when the private key is encrypted.\n","properties":{"ca":{"type":"string","description":"Certificate Authority (CA) certificate in PEM format (encrypted at rest).\nProvide this when the database server uses a certificate signed by a private 
CA\nthat is not in the system's default trust store.\n","writeOnly":true},"key":{"type":"string","description":"Client private key in PEM format (encrypted at rest).\nREQUIRED alongside cert for client certificate (mTLS) authentication.\nCannot be provided without cert.\n","writeOnly":true},"passphrase":{"type":"string","description":"Passphrase to decrypt an encrypted private key (encrypted at rest).\nOnly needed when the PEM private key in the key field is password-protected.\n","writeOnly":true},"cert":{"type":"string","description":"Client certificate in PEM format (encrypted at rest).\nREQUIRED alongside key for client certificate (mTLS) authentication.\nCannot be provided without key.\n","writeOnly":true}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite ERP connections. Used when the connection type is \"netsuite\".\n\nNetSuite connections support two authentication methods:\n- Token-Based Authentication (TBA) — recommended for production. Requires tokenId and tokenSecret.\n- Basic authentication (email/password) — legacy method, limited functionality.\n\n**Required fields by auth type**\n- authType \"token\": account, tokenId, tokenSecret, roleId\n- authType \"token-auto\": account (tokens are managed automatically via iClient)\n- authType \"basic\": account, email, password, roleId\n\n**Environment**\nThe environment field selects which NetSuite instance to connect to.\nThe account ID format changes based on environment (e.g., \"123456\" for production,\n\"123456_SB1\" for non-production).\n","properties":{"authType":{"type":"string","enum":["basic","token","token-auto"],"description":"Authentication method for the NetSuite connection.\n\n- \"token\" — Token-Based Authentication (TBA). 
Recommended for production.\n  Requires tokenId and tokenSecret from a NetSuite integration record.\n- \"token-auto\" — Automatic token management via Celigo's iClient.\n  Tokens are provisioned and rotated automatically.\n- \"basic\" — Legacy email/password authentication. Not recommended for\n  new connections; has limited API access compared to TBA.\n"},"account":{"type":"string","description":"NetSuite account ID. REQUIRED for all auth types.\n\nThis is the account identifier visible in NetSuite under Setup > Company > Company Information.\nThe value is automatically uppercased.\n\nFormat varies by environment:\n- Production: \"123456\" or \"TSTDRV123456\"\n- Non-production: \"123456_SB1\" (suffix indicates the environment number)\n- Beta: \"123456\" (same as production, but with environment set to \"beta\")\n"},"environment":{"type":"string","enum":["production","beta"],"description":"NetSuite environment to connect to.\n\n- \"production\" — Live production instance.\n- \"beta\" — NetSuite beta/release-preview environment.\n\nDefaults to \"production\" when not specified.\n"},"tokenId":{"type":"string","description":"NetSuite TBA token ID (encrypted at rest). REQUIRED when authType is \"token\".\n\nGenerated in NetSuite under Setup > Users/Roles > Access Tokens.\nMust be paired with the corresponding tokenSecret.\n","writeOnly":true},"tokenSecret":{"type":"string","description":"NetSuite TBA token secret (encrypted at rest). REQUIRED when authType is \"token\".\n\nGenerated alongside the tokenId in NetSuite. 
Treat as a sensitive credential.\n","writeOnly":true},"entityId":{"type":"string","description":"NetSuite entity/user ID associated with the token."},"tokenName":{"type":"string","description":"Human-readable name of the NetSuite access token for identification purposes."},"roleId":{"type":"string","description":"NetSuite role ID that determines the permissions for this connection.\n\nThe role controls which records, fields, and operations are accessible.\nMust match the role associated with the access token in NetSuite.\n"},"email":{"type":"string","description":"NetSuite user email address. REQUIRED when authType is \"basic\".\nUsed as the login credential for basic (email/password) authentication.\n","format":"email"},"password":{"type":"string","description":"NetSuite user password (encrypted at rest). REQUIRED when authType is \"basic\".","writeOnly":true},"requestLevelCredentials":{"type":"boolean","description":"When true, authentication credentials are sent with each individual API request\nrather than maintaining a persistent session. Enable this for environments\nwhere session-based auth is unreliable.\n"},"dataCenterURLs":{"type":"object","description":"NetSuite data center URL overrides. Normally auto-discovered from the account ID.\nOnly set this to override the default data center routing.\n"},"accountName":{"type":"string","description":"Human-readable NetSuite account name (display purposes only)."},"roleName":{"type":"string","description":"Human-readable name of the NetSuite role (display purposes only)."},"wsdlVersion":{"type":"string","description":"NetSuite WSDL (Web Services Description Language) version for SuiteTalk Web Services.\n\nControls which API version is used for SOAP-based operations. 
Newer versions\nmay include additional record types and fields.\n\n- \"latest\" — Use the most recent stable WSDL version.\n- \"next\" — Use the next (pre-release) WSDL version.\n- Specific versions like \"2023.2\" pin to that exact API version.\n","enum":["latest","next","2023.2","2023.1","2022.2","2022.1"],"default":"latest"},"applicationId":{"type":"string","description":"NetSuite application ID from the integration record.\nRequired for some authentication configurations to identify the calling application.\n"},"concurrencyLevelRESTlet":{"type":"number","description":"Maximum concurrent requests to NetSuite RESTlet endpoints.\nNetSuite enforces governance limits on concurrent RESTlet requests per account.\nKeep this low to avoid SuiteScript governance errors.\n","minimum":1,"maximum":10,"default":1},"concurrencyLevelWebServices":{"type":"number","description":"Maximum concurrent requests to NetSuite SuiteTalk Web Services (SOAP API).\nNetSuite enforces concurrency limits per account — exceeding them causes\n\"Only one request may be made against a session at a time\" errors.\n","minimum":1,"maximum":10,"default":5},"concurrencyLevel":{"type":"number","description":"General concurrency level for this connection. Controls the overall\nmaximum concurrent requests across all operation types.\n","minimum":1,"maximum":10,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts concurrency\nbetween 1 and this value based on rate limit and governance feedback.\n","minimum":1,"maximum":10},"_iClientId":{"type":"string","format":"objectId","description":"ID of the iClient used for token-based authentication."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce CRM connections. 
Used when the connection type is \"salesforce\".\n\nSalesforce connections authenticate via OAuth 2.0 with two supported flows:\n- JWT Bearer Token — server-to-server auth using a Salesforce Connected App. Recommended for automated integrations.\n- Refresh Token — interactive OAuth flow where the user authorizes via browser. Good for user-context integrations.\n\n**Required fields by flow type**\n- oauth2FlowType \"jwtBearerToken\": username (Salesforce username of the authorized user)\n- oauth2FlowType \"refreshToken\": refreshToken (obtained via browser-based OAuth flow)\n\n**Environment selection**\nThe login endpoint defaults to login.salesforce.com for production orgs.\n","properties":{"baseURI":{"type":"string","description":"Custom Salesforce instance URL. Overrides the default login URL.\n\nNormally auto-discovered during OAuth authentication. Only set this to\nforce a specific instance URL (e.g., \"https://mycompany.my.salesforce.com\").\n","format":"uri"},"oauth2FlowType":{"type":"string","enum":["jwtBearerToken","refreshToken"],"description":"OAuth 2.0 authentication flow type.\n\n- \"jwtBearerToken\" — Server-to-server JWT Bearer Token flow. Does not require\n  user interaction. Requires a Salesforce Connected App with a digital certificate,\n  and the username field must be set to the authorized Salesforce user.\n- \"refreshToken\" — Authorization Code / Refresh Token flow. 
Requires initial\n  browser-based authorization, then uses the refresh token for ongoing access.\n\nChoose \"jwtBearerToken\" for automated server-to-server integrations.\nChoose \"refreshToken\" for integrations that operate in a specific user's context.\n"},"username":{"type":"string","description":"Salesforce username for JWT Bearer Token authentication.\nREQUIRED when oauth2FlowType is \"jwtBearerToken\".\n\nThis is the Salesforce login username (email) of the user whose permissions\nthe integration will operate under.\n"},"bearerToken":{"type":"string","description":"OAuth access/bearer token (encrypted at rest).\nTypically auto-managed by the system during OAuth flows. Rarely set manually.\n","writeOnly":true},"refreshToken":{"type":"string","description":"OAuth refresh token (encrypted at rest). REQUIRED when oauth2FlowType is \"refreshToken\".\nObtained during the initial browser-based OAuth authorization flow.\n","writeOnly":true},"packagedOAuth":{"type":"boolean","description":"When true, uses Celigo's pre-configured Connected App for OAuth.\nWhen false, you must provide your own Connected App credentials via iClient.\n","default":false},"scope":{"type":"array","items":{"type":"string"},"description":"OAuth scopes to request during authorization. Controls the level of access granted.\n\nCommon scopes:\n- \"full\" — Full access to all Salesforce APIs.\n- \"refresh_token\" — Allows obtaining refresh tokens for long-lived access.\n- \"api\" — Access to REST and SOAP APIs.\n\nDefaults to [\"full\", \"refresh_token\"] which provides complete API access.\n","default":["full","refresh_token"]},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent API requests to Salesforce.\nSalesforce enforces per-org API request limits. Setting this too high\nmay exhaust your org's API call allocation faster.\n","minimum":1,"maximum":25,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. 
The system adjusts concurrency\nbetween 1 and this value based on rate limit feedback from Salesforce.\n","minimum":1,"maximum":25}}},"FTP":{"type":"object","description":"Configuration for FTP/SFTP/FTPS file transfer connections. Used when the connection type is \"ftp\".\n\nSupports three file transfer protocols:\n- FTP — Standard File Transfer Protocol (unencrypted). Default port 21.\n- SFTP — SSH File Transfer Protocol (encrypted via SSH). Default port 22.\n- FTPS — FTP over TLS/SSL (encrypted via TLS). Default port 21 (explicit) or 990 (implicit).\n\n**Required fields**\n- type (ftp, sftp, or ftps)\n- hostURI (server hostname or IP)\n- username\n\n**Authentication**\n- FTP/FTPS: username + password\n- SFTP: username + password, OR username + authKey (SSH private key), OR both\n\n**PGP encryption**\nOptional PGP encryption/decryption can be enabled for file-level security,\nindependent of transport-level encryption. Requires pgpEncryptKey and/or pgpDecryptKey.\n","required":["type","hostURI","username"],"properties":{"type":{"type":"string","enum":["ftp","sftp","ftps"],"description":"File transfer protocol type. REQUIRED.\n\n- \"ftp\" — Standard FTP. No transport encryption. Default port 21.\n- \"sftp\" — SSH-based file transfer. Encrypted transport. Default port 22.\n  Supports SSH key authentication via the authKey field.\n- \"ftps\" — FTP over TLS/SSL. Encrypted transport. Default port 21 (explicit TLS)\n  or 990 (implicit TLS — set useImplicitFtps to true).\n\nPrefer \"sftp\" for security. Use \"ftps\" when the server requires TLS.\nOnly use \"ftp\" for legacy systems that don't support encryption.\n"},"hostURI":{"type":"string","description":"FTP server hostname or IP address. REQUIRED.\nDo NOT include the protocol prefix (e.g., use \"ftp.example.com\", not \"sftp://ftp.example.com\").\n"},"port":{"type":"number","description":"Server port number. 
Defaults vary by type:\n- FTP: 21\n- SFTP: 22\n- FTPS (explicit): 21\n- FTPS (implicit): 990\n","minimum":1,"maximum":65535,"default":21},"username":{"type":"string","description":"Username for server authentication. REQUIRED."},"password":{"type":"string","description":"Password for server authentication (encrypted at rest).\nFor SFTP, either password or authKey (SSH key) is required.\n","writeOnly":true},"authKey":{"type":"string","description":"SSH private key for SFTP key-based authentication (encrypted at rest).\nOnly used when type is \"sftp\". Provide the full PEM-encoded private key.\nCan be used alone or alongside a password for two-factor auth.\n","writeOnly":true},"usePassiveMode":{"type":"boolean","description":"When true, uses passive mode for FTP/FTPS data connections.\nIn passive mode, the client initiates both control and data connections,\nwhich works better through firewalls and NAT. Enable for most scenarios.\n","default":true},"enableHostVerification":{"type":"boolean","description":"When true, verifies the server's SSH host key (SFTP) or TLS certificate (FTPS).\nDisable only for development/testing with self-signed certificates.\n","default":true},"userDirectoryIsRoot":{"type":"boolean","description":"When true, treats the user's home directory as the root directory.\nAll paths are relative to the user's home directory rather than the server root.\n","default":false},"useImplicitFtps":{"type":"boolean","description":"When true, uses implicit FTPS (TLS connection established immediately on port 990).\nWhen false, uses explicit FTPS (starts as FTP on port 21, upgrades to TLS via STARTTLS).\nOnly applies when type is \"ftps\".\n","default":false},"requireSocketReUse":{"type":"boolean","description":"When true, requires the data connection to reuse the same TLS session as the control connection.\nSome FTPS servers require this for security. 
Only applies to FTPS connections.\n","default":false},"entryParser":{"type":"string","enum":["UNIX","UNIX-TRIM","VMS","WINDOWS","OS/2","OS/400","AS/400","MVS","UNKNOWN-TYPE","NETWARE","MACOS-PETER"],"description":"File listing format parser. Controls how directory listings from the server are interpreted.\nMost servers use UNIX format. Only change this if directory listings appear garbled.\n\n- \"UNIX\" — Standard Unix/Linux servers (most common).\n- \"WINDOWS\" — Windows FTP servers using DOS-style listings.\n- \"MVS\" — IBM mainframe MVS systems.\n- \"AS/400\" — IBM AS/400 (iSeries) systems.\n- Other values are for specific legacy server platforms.\n"},"pgpEncryptKey":{"type":"string","description":"PGP public key for encrypting files before upload (encrypted at rest).\nWhen set, files are PGP-encrypted before being sent to the server.\n","writeOnly":true},"pgpDecryptKey":{"type":"string","description":"PGP private key for decrypting files after download (encrypted at rest).\nWhen set, downloaded files are automatically PGP-decrypted.\n","writeOnly":true},"pgpPassphrase":{"type":"string","description":"Passphrase for the PGP private key (encrypted at rest).","writeOnly":true},"pgpKeyAlgorithm":{"type":"string","enum":["CAST5","3DES","AES-128","AES-192","AES-256"],"description":"Symmetric encryption algorithm used for PGP operations.\nAES-256 is recommended for strong encryption. 
CAST5 is the PGP default.\nMust match the algorithm expected by the recipient when encrypting.\n"},"pgpSignAndVerify":{"type":"boolean","description":"When true, PGP-signs outbound files and verifies signatures on inbound files.\nProvides authenticity and integrity verification on top of encryption.\n","default":false},"tradingPartner":{"type":"boolean","description":"When true, this connection is associated with a trading partner configuration\nfor B2B/EDI file exchanges.\n","default":false},"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a trading partner connector."},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent file transfer operations.\nFTP servers often have low connection limits — keep this value conservative.\n","minimum":1,"maximum":10,"default":1},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts concurrency\nbetween 1 and this value based on server response feedback.\n","minimum":1,"maximum":10}}},"S3":{"type":"object","description":"Configuration for Amazon S3 connections. Used when the connection type is \"s3\".\n\nProvides access to Amazon S3 buckets for file-based integrations (upload, download,\nlist, and delete operations). Uses AWS IAM credentials for authentication.\n\n**Required fields**\n- accessKeyId (AWS access key)\n- secretAccessKey (AWS secret key)\n\n**Ping**\nSet pingBucket to an accessible S3 bucket name so Celigo can verify\nthe connection credentials are valid.\n","required":["accessKeyId","secretAccessKey"],"properties":{"accessKeyId":{"type":"string","description":"AWS access key ID for IAM authentication. REQUIRED.\nFrom an IAM user or role with S3 access permissions (s3:GetObject, s3:PutObject, s3:ListBucket, etc.).\n"},"secretAccessKey":{"type":"string","description":"AWS secret access key (encrypted at rest). REQUIRED.\nPaired with the accessKeyId. 
Treat as a sensitive credential.\n","writeOnly":true},"pingBucket":{"type":"string","description":"S3 bucket name used for connection health checks (ping).\nThe system performs a HEAD request on this bucket to verify credentials.\nMust be a bucket the IAM credentials have access to.\n"},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent S3 operations.\nS3 has high throughput limits, so this can be set relatively high.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts concurrency\nbetween 1 and this value based on performance feedback.\n","minimum":1,"maximum":100}}},"REST":{"type":"object","description":"Configuration for REST connector connections. Used when the connection type is \"rest\".\n\nREST connections are template-driven HTTP connections that use a pre-built HTTP Connector\ndefinition. The connector template pre-configures authentication, base URI, and other\nsettings — the user only fills in credentials and account-specific values.\n\n**REST vs HTTP connections**\n- Use \"rest\" when connecting to an application that has a Celigo HTTP Connector template\n  (referenced via _httpConnectorId). The connector provides pre-configured API settings.\n- Use \"http\" for fully custom API connections where you configure everything manually.\n\n**Authentication**\nThe authType field selects the authentication method. Available methods depend on the\nconnector template. 
Credentials are stored in the appropriate auth sub-fields (basicAuth,\nbearerToken, refreshToken, oauth, etc.).\n","required":["mediaType","baseURI","authType"],"properties":{"mediaType":{"type":"string","enum":["json","urlencoded","xml","csv"],"description":"Default content type for API requests and responses.\n\n- \"json\" — application/json (most common for REST APIs).\n- \"xml\" — application/xml (for XML-based APIs).\n- \"urlencoded\" — application/x-www-form-urlencoded (for form-style APIs).\n- \"csv\" — text/csv (for CSV-based data exchange).\n"},"baseURI":{"type":"string","description":"Base URL for all API requests. REQUIRED.\nAll relative URIs in exports and imports are appended to this base URL.\nMay be pre-populated by the HTTP Connector template.\n","format":"uri"},"authType":{"type":"string","enum":["basic","token","oauth","custom","cookie","jwt","hmac","wsse","oauth1"],"description":"Authentication method for this connection. REQUIRED.\n\n- \"basic\" — HTTP Basic Auth. Requires basicAuth.username and basicAuth.password.\n- \"token\" — Bearer token or API key. Requires bearerToken. Supports auto-refresh.\n- \"oauth\" — OAuth 2.0. Requires oauth configuration and typically authURI/oauthTokenURI.\n- \"cookie\" — Cookie-based session auth. Requires cookieAuth configuration.\n- \"jwt\" — JWT-based authentication.\n- \"hmac\" — HMAC signature authentication.\n- \"wsse\" — WS-Security. Requires basicAuth credentials.\n- \"oauth1\" — OAuth 1.0a. 
Requires oauth.oauth1 configuration.\n- \"custom\" — No built-in auth; credentials in headers or encrypted fields.\n"},"authURI":{"type":"string","description":"OAuth 2.0 authorization endpoint URI.\nUsed for OAuth authorization code flow where users authorize via browser.\n","format":"uri"},"oauthTokenURI":{"type":"string","description":"OAuth 2.0 token endpoint URI.\nUsed to exchange authorization codes for access tokens and to refresh tokens.\n","format":"uri"},"disableStrictSSL":{"type":"boolean","description":"When true, disables strict SSL/TLS certificate validation.\nOnly use for development/testing with self-signed certificates.\n","default":false},"skipOauthValidations":{"type":"boolean","description":"When true, skips Celigo's built-in OAuth configuration validation.\nUse when the connector has non-standard OAuth requirements.\n","default":false},"isHTTPProxy":{"type":"boolean","description":"When true, this REST connection acts as an HTTP proxy for another connection.\n","default":false},"authHeader":{"type":"string","description":"Custom HTTP header name for the authorization token.\nDefaults to \"Authorization\". Change only if the API uses a non-standard header.\n","default":"Authorization"},"retryHeader":{"type":"string","description":"HTTP response header name containing retry-after delay for rate-limited requests.\nDefaults to \"Retry-After\" (HTTP standard).\n","default":"Retry-After"},"authScheme":{"type":"string","enum":["MAC","OAuth","Bearer","Hmac"],"description":"Authorization header scheme/prefix prepended before the token value.\nProduces headers like \"Authorization: Bearer <token>\".\nDefaults to \"Bearer\" which is the most common scheme.\n","default":"Bearer"},"scope":{"type":"array","items":{"type":"string"},"description":"OAuth scopes to request during authorization. 
Controls the level of API access granted.\n"},"scopeDelimiter":{"type":"string","description":"Delimiter character between multiple OAuth scopes.\nDefaults to a space (\" \") per the OAuth 2.0 spec. Some APIs use commas.\n","default":" "},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent API requests.\nSet based on the target API's rate limit documentation.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts concurrency\nbetween 1 and this value based on rate limit feedback.\n","minimum":1,"maximum":100},"bearerToken":{"type":"string","description":"Bearer token or API key for authentication (encrypted at rest).\nUsed when authType is \"token\".\n","writeOnly":true},"refreshToken":{"type":"string","description":"OAuth refresh token (encrypted at rest).\nUsed to obtain new access tokens when they expire.\n","writeOnly":true},"tokenLocation":{"type":"string","enum":["header","url"],"description":"Where to include the access token in outbound requests.\n- \"header\" — Sent in the Authorization header (default, most common).\n- \"url\" — Sent as a URL query parameter (use tokenParam for the parameter name).\n","default":"header"},"tokenParam":{"type":"string","description":"URL query parameter name for the access token when tokenLocation is \"url\".\n","default":"access_token"},"basicAuth":{"type":"object","description":"Basic authentication credentials. REQUIRED when authType is \"basic\" or \"wsse\".\n","properties":{"username":{"type":"string","description":"Username for Basic authentication."},"password":{"type":"string","description":"Password for Basic authentication (encrypted at rest).","writeOnly":true}}},"cookieAuth":{"type":"object","description":"Cookie-based session authentication configuration.\nREQUIRED when authType is \"cookie\".\n","properties":{"uri":{"type":"string","description":"Login endpoint URI. REQUIRED. 
The system sends a request here to obtain session cookies."},"body":{"type":"string","description":"Request body for the login request (e.g., JSON with credentials)."},"method":{"type":"string","description":"HTTP method for the login request (typically POST)."},"successStatusCode":{"type":"number","description":"HTTP status code that confirms successful authentication."}}},"oauth":{"$ref":"#/components/schemas/OAuth"},"refreshTokenMethod":{"type":"string","enum":["POST","PUT","GET"],"description":"HTTP method for token refresh requests.\n","default":"POST"},"refreshTokenBody":{"type":"string","description":"Request body template for token refresh requests."},"refreshTokenURI":{"type":"string","description":"URI for the token refresh endpoint.","format":"uri"},"refreshTokenPath":{"type":"string","description":"JSON path to extract the new access token from the refresh response.\nExample: \"access_token\" or \"data.token\".\n"},"refreshTokenMediaType":{"type":"string","enum":["json","urlencoded"],"description":"Content type for token refresh request bodies.","default":"json"},"refreshTokenHeaders":{"type":"array","description":"Additional headers to include in token refresh requests.","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}},"required":["name","value"]}},"headers":{"type":"array","description":"Default HTTP headers included in every API request.\nUse for API keys in custom headers, content negotiation, or required headers.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value. 
Supports handlebars expressions."}},"required":["name","value"]}},"pingRelativeURI":{"type":"string","description":"Relative URI (appended to baseURI) for connection health check requests.\nShould be a lightweight endpoint (e.g., \"/me\", \"/health\").\n"},"pingSuccessPath":{"type":"string","description":"JSON path to extract a success indicator from the ping response.\nIf the value matches one of pingSuccessValues, the ping succeeds.\n"},"pingSuccessValues":{"type":"array","items":{"type":"string"},"description":"Values that indicate a successful ping at pingSuccessPath."},"pingFailurePath":{"type":"string","description":"JSON path to extract a failure indicator from the ping response.\n"},"pingFailureValues":{"type":"array","items":{"type":"string"},"description":"Values that indicate a failed ping at pingFailurePath."},"pingMethod":{"type":"string","enum":["GET","POST"],"description":"HTTP method for ping requests.","default":"GET"},"pingBody":{"type":"string","description":"Request body for ping requests (when pingMethod is POST)."},"encrypted":{"type":"object","description":"Encrypted custom fields for sensitive configuration values.\nField definitions are in encryptedFields.\n"},"encryptedFields":{"type":"array","description":"Metadata defining the encrypted custom fields on this connection.","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the encrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip text shown next to the field in the UI."}}}},"unencrypted":{"type":"object","description":"Unencrypted custom fields for non-sensitive configuration values.\nField definitions are in unencryptedFields.\n"},"unencryptedFields":{"type":"array","description":"Metadata defining the 
unencrypted custom fields on this connection.","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the unencrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip text shown next to the field in the UI."},"type":{"type":"string","description":"Field type hint for the UI (e.g., \"text\", \"select\")."}}}},"info":{"type":"object","description":"Additional metadata about the REST connection, populated by the system.","readOnly":true}}},"OAuth":{"type":"object","description":"OAuth 2.0 and OAuth 1.0a authentication configuration.\n\nUsed as a sub-object within HTTP and REST connection auth configurations.\nSupports three OAuth 2.0 grant types and OAuth 1.0a for legacy APIs.\n\n**OAuth 2.0 grant types**\n- \"authorizationcode\" — Authorization Code flow (browser-based user consent).\n  Requires authURI, tokenURI, and user authorization via browser.\n- \"clientcredentials\" — Client Credentials flow (server-to-server, no user).\n  Requires tokenURI, clientId, clientSecret.\n- \"password\" — Resource Owner Password flow (username/password exchange).\n  Requires tokenURI, username, password, clientId, clientSecret.\n\n**OAuth 1.0a**\nFor OAuth 1.0a APIs, configure the oauth1 sub-object with consumer key/secret\nand access token/secret. 
Supports HMAC, RSA, and PLAINTEXT signature methods.\n","properties":{"type":{"type":"string","enum":["custom","assistant"],"description":"OAuth configuration mode.\n- \"custom\" — Fully user-configured OAuth settings.\n- \"assistant\" — Pre-configured via an application assistant connector.\n"},"grantType":{"type":"string","enum":["authorizationcode","clientcredentials","password","implicit"],"description":"OAuth 2.0 grant type that determines the authentication flow.\n\n- \"authorizationcode\" — Authorization Code flow. Most secure. Requires user\n  authorization via browser redirect. Use for integrations that act on behalf of a user.\n- \"clientcredentials\" — Client Credentials flow. Server-to-server authentication\n  without user involvement. Use for machine-to-machine integrations.\n- \"password\" — Resource Owner Password Credentials flow. Exchanges username/password\n  directly for tokens. Only use when the API does not support other flows.\n- \"implicit\" — Implicit flow (legacy, not recommended for new integrations).\n","default":"authorizationcode"},"authURI":{"type":"string","format":"uri","description":"OAuth 2.0 authorization endpoint URL.\nREQUIRED for \"authorizationcode\" grant type. The user is redirected to this URL\nto authorize the application.\n"},"tokenURI":{"type":"string","format":"uri","description":"OAuth 2.0 token endpoint URL.\nREQUIRED for \"authorizationcode\", \"clientcredentials\", and \"password\" grant types.\nThe system exchanges credentials or authorization codes for access tokens at this URL.\n"},"skipOauthValidations":{"type":"boolean","description":"When true, skips Celigo's built-in OAuth configuration validation.\nUse when the API has non-standard OAuth requirements that conflict with validation rules.\n","default":false},"scope":{"type":"array","items":{"type":"string"},"description":"OAuth scopes to request during authorization. 
Controls the level of API access.\nScope values are API-specific (e.g., \"read\", \"write\", \"admin\").\n"},"scopeDelimiter":{"type":"string","description":"Delimiter between multiple scope values. Defaults to a space (\" \") per the\nOAuth 2.0 spec. Some APIs use commas or other delimiters.\n","default":" "},"clientId":{"type":"string","description":"OAuth client ID (application ID) registered with the API provider.\nREQUIRED for all OAuth 2.0 grant types.\n"},"clientSecret":{"type":"string","description":"OAuth client secret (encrypted at rest).\nREQUIRED for \"authorizationcode\" and \"clientcredentials\" grant types.\n","writeOnly":true},"username":{"type":"string","description":"Resource owner username. REQUIRED when grantType is \"password\".\n"},"password":{"type":"string","description":"Resource owner password (encrypted at rest). REQUIRED when grantType is \"password\".\n","writeOnly":true},"clientCredentialsLocation":{"type":"string","enum":["header","body"],"description":"Where to send client credentials in token requests.\n\n- \"header\" — Send as HTTP Basic Auth header (default, recommended by OAuth spec).\n- \"body\" — Send as form parameters in the request body.\n  Use when the API does not support Basic Auth for client credentials.\n","default":"header"},"accessTokenPath":{"type":"string","description":"JSON path to extract the access token from the token endpoint response.\nDefaults to \"access_token\" per the OAuth 2.0 spec.\nChange only if the API returns the token at a non-standard path.\n"},"accessTokenHeaders":{"type":"array","description":"Additional HTTP headers to include in token endpoint requests.\nUse for APIs that require custom headers beyond the standard OAuth parameters.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"accessTokenBody":{"type":"string","description":"Additional body content to include in token endpoint requests.\nAppended to the standard OAuth 
parameters.\n"},"oauth2RedirectUrl":{"type":"string","description":"OAuth 2.0 redirect URI (callback URL) registered with the API provider.\nMust exactly match the redirect URI configured in the OAuth application registration.\n"},"useIClientFields":{"type":"boolean","description":"When true, uses iClient-managed OAuth credentials (clientId/clientSecret)\ninstead of the values in this configuration.\n"},"oauth1":{"type":"object","description":"OAuth 1.0a configuration for legacy APIs that use the older OAuth protocol.\n\n**Required fields**\n- consumerKey (always required)\n- accessToken (always required)\n- For HMAC methods: consumerSecret and tokenSecret\n- For RSA methods: consumerPrivateKey\n","properties":{"consumerKey":{"type":"string","description":"OAuth 1.0a consumer key (API key). REQUIRED.\nIdentifies the application making the request.\n"},"consumerSecret":{"type":"string","description":"OAuth 1.0a consumer secret (encrypted at rest).\nREQUIRED for HMAC and PLAINTEXT signature methods.\n","writeOnly":true},"accessToken":{"type":"string","description":"OAuth 1.0a access token (encrypted at rest). 
REQUIRED.\nRepresents the user's authorization for the application to access their data.\n","writeOnly":true},"tokenSecret":{"type":"string","description":"OAuth 1.0a token secret (encrypted at rest).\nREQUIRED for HMAC and PLAINTEXT signature methods.\n","writeOnly":true},"signatureMethod":{"type":"string","enum":["HMAC-SHA1","HMAC-SHA256","HMAC-SHA512","RSA-SHA1","RSA-SHA256","RSA-SHA512","PLAINTEXT"],"description":"OAuth 1.0a request signing method.\n\nHMAC methods (require consumerSecret and tokenSecret):\n- \"HMAC-SHA1\" — Most widely supported (legacy).\n- \"HMAC-SHA256\" — More secure HMAC variant.\n- \"HMAC-SHA512\" — Strongest HMAC variant.\n\nRSA methods (require consumerPrivateKey):\n- \"RSA-SHA1\" — RSA signing (legacy).\n- \"RSA-SHA256\" — More secure RSA variant.\n- \"RSA-SHA512\" — Strongest RSA variant.\n\n- \"PLAINTEXT\" — No cryptographic signing. Only for testing over HTTPS.\n"},"consumerPrivateKey":{"type":"string","description":"RSA private key for OAuth 1.0a RSA signature methods (encrypted at rest).\nREQUIRED when signatureMethod is RSA-SHA1, RSA-SHA256, or RSA-SHA512.\n","writeOnly":true},"realm":{"type":"string","description":"OAuth realm value included in the Authorization header.\nSome APIs require this to identify the authentication domain.\n"}}},"pkceCodeVerifier":{"type":"string","description":"PKCE (Proof Key for Code Exchange) code verifier for enhanced OAuth 2.0 security.\nManaged internally by the system during authorization code flows.\n","writeOnly":true}}},"HTTP":{"type":"object","description":"Configuration for HTTP/REST API connections. 
Used when the connection type is \"http\".\n\nThis is the most versatile connection type in Celigo, supporting any REST, SOAP, or generic HTTP API.\nIt handles authentication, request/response formatting, rate limiting, and connection health checks.\n\n**When to use**\n- Any REST or SOAP API not covered by a specialized connector (Salesforce, NetSuite, etc.)\n- Custom HTTP integrations with APIs that use standard auth methods\n- GraphQL APIs (set formType to \"graph_ql\")\n- Amazon Selling Partner API (set type to \"Amazon-SP-API\")\n\n**Authentication models**\nThe auth.type field selects the authentication strategy. Each type requires specific sub-fields:\n- \"basic\" — HTTP Basic Auth (username/password)\n- \"token\" — Bearer token or API key with optional auto-refresh\n- \"oauth\" — OAuth 2.0 (authorization code, client credentials, or password grant)\n- \"jwt\" / \"jwtbearer\" — JWT-based authentication with HMAC or RSA signing\n- \"cookie\" — Cookie-based session authentication\n- \"digest\" — HTTP Digest Authentication\n- \"oauth1\" — OAuth 1.0a (HMAC or RSA signatures)\n- \"custom\" — No built-in auth; credentials go in headers or encrypted fields\n- \"wsse\" — WS-Security UsernameToken (SOAP APIs)\n- \"specific\" — Platform-specific auth (e.g., PTX)\n","required":["mediaType"],"properties":{"mediaType":{"type":"string","enum":["xml","json","urlencoded","form-data","plaintext"],"description":"Default content type for outbound HTTP request bodies. REQUIRED.\n\nThis controls the Content-Type header and how request bodies are serialized:\n- \"json\" — application/json. Use for most modern REST APIs.\n- \"xml\" — application/xml. Use for SOAP or XML-based APIs.\n- \"urlencoded\" — application/x-www-form-urlencoded. Use for form-style POST bodies.\n- \"form-data\" — multipart/form-data. Use for file uploads or multipart requests.\n- \"plaintext\" — text/plain. 
Use for raw text payloads.\n\nDefault to \"json\" unless the API documentation specifies otherwise.\n"},"successMediaType":{"type":"string","enum":["xml","csv","json","plaintext"],"description":"Expected content type of successful API responses. Controls how response bodies are parsed.\n\n- \"json\" — Parse response as JSON (default for most APIs).\n- \"xml\" — Parse response as XML.\n- \"csv\" — Parse response as CSV.\n- \"plaintext\" — Treat response as raw text, no parsing.\n\nIf omitted, the system infers the format from the response Content-Type header.\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Expected content type of error responses from the API. Controls how error response bodies are parsed for extracting error messages.\n\nIf omitted, defaults to the same format as successMediaType.\n"},"baseURI":{"type":"string","description":"Base URL for all API requests made through this connection. REQUIRED.\n\nAll relative URIs in exports and imports are appended to this base URL.\nMust be an absolute URL (e.g., \"https://api.example.com/v2\").\nHandlebars expressions are supported for dynamic URLs.\n\nDo NOT include trailing slashes — relative URIs in exports/imports should start with \"/\".\n","format":"uri"},"disableStrictSSL":{"type":"boolean","description":"When true, disables strict SSL/TLS certificate validation for API requests.\n\nOnly set to true for development/testing with self-signed certificates.\nNEVER disable in production — it removes protection against man-in-the-middle attacks.\n","default":false},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent HTTP requests this connection can make simultaneously.\n\nHigher values increase throughput but may trigger API rate limits.\nSet this based on the target API's rate limit documentation.\n","minimum":1,"maximum":100,"default":25},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. 
The system automatically adjusts\nconcurrency between 1 and this value based on rate limit feedback.\n\nOnly relevant when autoRecoverRateLimitErrors is enabled on the connection.\n","minimum":1,"maximum":100},"retryHeader":{"type":"string","description":"HTTP response header name that contains the retry delay (in seconds) when rate-limited.\n\nDefaults to \"Retry-After\" which is the HTTP standard. Only change this if the API\nuses a non-standard header name for retry-after values.\n","default":"Retry-After"},"formType":{"type":"string","enum":["assistant","rest","http","graph_ql","assistant_graphql"],"description":"Controls the UI form layout for configuring this connection. Determines which\nfields are shown and how they are organized in the Celigo UI.\n\n- \"http\" — Standard HTTP connection form with full control over all fields.\n- \"rest\" — Simplified REST connector form linked to an HTTP connector template.\n- \"assistant\" — Application assistant form with pre-configured settings.\n- \"graph_ql\" — GraphQL-specific form with query/mutation support.\n- \"assistant_graphql\" — Application assistant form with GraphQL support.\n\nFor programmatic creation, \"http\" is the most common choice.\n"},"type":{"type":"string","enum":["Amazon-SP-API","Amazon-Hybrid","vendor_central"],"description":"Specific API type for Amazon integrations. Only set this for Amazon connections.\n\n- \"Amazon-SP-API\" — Amazon Selling Partner API.\n- \"Amazon-Hybrid\" — Hybrid Amazon connection supporting both MWS and SP-API.\n- \"vendor_central\" — Amazon Vendor Central API.\n"},"clientCertificates":{"type":"object","description":"Client certificate configuration for mutual TLS (mTLS) authentication.\n\nUse when the API server requires a client certificate to establish the TLS connection.\nYou can provide either a PEM cert/key pair OR a PFX bundle, but not both.\n","properties":{"cert":{"type":"string","description":"Client certificate in PEM format. 
Must be paired with the key field.\nCannot be used together with pfx.\n"},"key":{"type":"string","description":"Private key for the client certificate in PEM format (encrypted at rest).","writeOnly":true},"ca":{"type":"string","description":"Certificate Authority (CA) certificate in PEM format.\nUse when the server's certificate is signed by a private CA not in the default trust store.\n"},"passphrase":{"type":"string","description":"Passphrase to decrypt an encrypted private key or PFX bundle (encrypted at rest).","writeOnly":true},"pfx":{"type":"string","description":"PKCS#12 (.pfx/.p12) bundle containing both the certificate and private key (encrypted at rest).\nCannot be used together with cert/key.\n","writeOnly":true}}},"ping":{"type":"object","description":"Connection health check (ping) configuration. Defines how Celigo tests\nwhether this connection is alive and authenticated.\n\nWhen configured, Celigo sends an HTTP request to the specified endpoint and\nevaluates the response to determine connection health. The ping runs when\ntesting the connection in the UI and periodically during flow execution.\n","properties":{"relativeURI":{"type":"string","description":"Relative URI appended to baseURI for the ping request.\nShould be a lightweight, fast endpoint (e.g., \"/me\", \"/health\", \"/api/v1/status\").\n"},"method":{"type":"string","enum":["GET","POST","PUT","HEAD"],"description":"HTTP method for the ping request. Defaults to GET.\nUse POST only if the health endpoint requires it.\n","default":"GET"},"body":{"type":"string","description":"Request body for the ping request. 
Only used when method is POST or PUT.\nFor form-data mediaType, must be valid multipart form data.\n"},"successPath":{"type":"string","description":"JSON path or XPath expression to extract a success indicator from the ping response.\nIf the value at this path matches one of the successValues, the ping succeeds.\nIf omitted, any 2xx response is considered successful.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values that indicate a successful ping when found at successPath.\nRequires successPath to be set.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether the value at successPath can be an array (any matching element counts as success)."},"failPath":{"type":"string","description":"JSON path or XPath expression to extract a failure indicator from the ping response.\nIf the value at this path matches one of the failValues, the ping fails even if the HTTP status is 2xx.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values that indicate a failed ping when found at failPath.\nRequires failPath to be set.\n"},"errorPath":{"type":"string","description":"JSON path or XPath expression to extract a human-readable error message from\na failed ping response. The extracted message is shown to the user in the UI.\n"}}},"auth":{"type":"object","description":"Authentication configuration for the API connection.\n\nThe auth.type field selects the authentication strategy. Each type requires\nspecific sub-fields — see the type field description for details.\n","properties":{"type":{"type":"string","enum":["custom","basic","token","oauth","wsse","cookie","jwt","digest","specific","oauth1","jwtbearer"],"description":"Authentication method for this connection. Determines which auth sub-fields are required.\n\n- \"basic\" — HTTP Basic Auth. Requires: auth.basic.username and auth.basic.password.\n- \"token\" — API key or bearer token. 
Requires: auth.token with token value and location.\n  Supports automatic token refresh via refreshMethod/refreshRelativeURI.\n- \"oauth\" — OAuth 2.0. Requires: auth.oauth with grant type, URIs, and credentials.\n  Supports authorization code, client credentials, and password grant flows.\n- \"jwtbearer\" — JWT Bearer Token. Requires: auth.jwt with algorithm and claims.\n  HMAC methods need a secret; RSA/EC methods need a privateKey.\n- \"cookie\" — Cookie-based session auth. Requires: auth.cookie.uri (login endpoint).\n- \"digest\" — HTTP Digest Auth. Requires: auth.basic.username and auth.basic.password.\n- \"oauth1\" — OAuth 1.0a. Requires: auth.oauth.oauth1 with consumer key/secret and tokens.\n- \"custom\" — No built-in auth handling. Put credentials in headers or encrypted fields.\n- \"wsse\" — WS-Security. Requires: auth.basic.username and auth.basic.password.\n- \"specific\" — Platform-specific auth (e.g., PTX).\n- \"jwt\" — Legacy JWT auth. Prefer \"jwtbearer\" for new connections.\n"},"failStatusCode":{"type":"number","description":"HTTP status code that indicates an authentication failure (e.g., 401, 403).\nWhen this status code is received, the system triggers re-authentication\nbefore retrying the request.\n"},"failPath":{"type":"string","description":"JSON path or XPath expression to check in response bodies for authentication failure indicators.\nUsed when APIs return 200 OK but embed auth errors in the response body.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at failPath that indicate an authentication failure.\nRequires failPath to be set.\n"},"skipFollowAuthorizationHeader":{"type":"boolean","description":"When true, the Authorization header is NOT forwarded on HTTP redirects.\nEnable this for APIs that redirect to a different domain after authentication.\n"},"basic":{"type":"object","description":"Basic authentication credentials. 
REQUIRED when auth.type is \"basic\", \"wsse\", or \"digest\".\n","properties":{"username":{"type":"string","description":"Username for Basic/Digest/WSSE authentication."},"password":{"type":"string","description":"Password for Basic/Digest/WSSE authentication (encrypted at rest).","writeOnly":true}}},"token":{"type":"object","description":"Token-based authentication configuration. REQUIRED when auth.type is \"token\".\n\nSupports static API keys/bearer tokens and automatic token refresh flows.\nThe token can be sent in a header (Authorization), query parameter, or request body.\n","properties":{"token":{"type":"string","description":"The API key or bearer token value (encrypted at rest).\nREQUIRED unless automatic token refresh is configured.\n","writeOnly":true},"location":{"type":"string","enum":["url","header","body"],"description":"Where to include the token in outbound requests.\n- \"header\" — Sent in an HTTP header (most common). Use headerName and scheme to control format.\n- \"url\" — Sent as a URL query parameter. Use paramName to set the parameter name.\n- \"body\" — Included in the request body.\n"},"headerName":{"type":"string","description":"HTTP header name for the token when location is \"header\".\nDefaults to \"Authorization\" if omitted.\n"},"scheme":{"type":"string","description":"Token scheme/prefix when sent in a header. 
Prepended before the token value.\nCommon values: \"Bearer\", \"Token\", \"Basic\".\nExample: scheme \"Bearer\" produces header \"Authorization: Bearer <token>\".\n"},"paramName":{"type":"string","description":"Query parameter name for the token when location is \"url\".\nExample: paramName \"api_key\" produces URL \"?api_key=<token>\".\n"},"refreshMethod":{"type":"string","enum":["GET","POST","PUT"],"description":"HTTP method for automatic token refresh requests.\nREQUIRED when no static token is provided (refresh-based auth flow).\n","default":"POST"},"refreshRelativeURI":{"type":"string","description":"Relative URI (appended to baseURI) for the token refresh endpoint.\nThe system calls this endpoint to obtain a new token when the current one expires.\n"},"refreshBody":{"type":"string","description":"Request body to send with the token refresh request."},"refreshMediaType":{"type":"string","enum":["json","urlencoded","xml","plaintext"],"description":"Content type for the token refresh request body.\n","default":"urlencoded"},"refreshResponseMediaType":{"type":"string","enum":["json","xml"],"description":"Expected content type of the token refresh response."},"refreshTokenPath":{"type":"string","description":"JSON path to extract the new token from the refresh response.\nExample: \"access_token\" or \"data.token\".\n"},"refreshToken":{"type":"string","description":"Refresh token used to obtain a new access token (encrypted at rest).","writeOnly":true},"refreshTokenLocation":{"type":"string","enum":["header","body"],"description":"Where to include the refresh token in refresh requests."},"refreshHeaders":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Additional headers to include in token refresh requests."},"tokenPaths":{"type":"array","items":{"type":"string"},"description":"JSON paths to extract multiple token values from the refresh response.\nUse when the refresh response contains 
tokens at different paths that need\nto be stored for subsequent requests.\n"}}},"oauth":{"$ref":"#/components/schemas/OAuth"},"cookie":{"type":"object","description":"Cookie-based session authentication. REQUIRED when auth.type is \"cookie\".\n\nThe system authenticates by sending a request to the login URI, captures the\nsession cookies from the response, and includes them in all subsequent API requests.\n","properties":{"uri":{"type":"string","description":"Login endpoint URI for cookie authentication. REQUIRED.\nThe system sends a request to this URI to obtain session cookies.\n"},"body":{"type":"string","description":"Request body for the login request (e.g., JSON with username/password)."},"method":{"type":"string","description":"HTTP method for the login request (typically POST)."},"successStatusCode":{"type":"number","description":"HTTP status code that confirms successful authentication.\nIf the login response returns this status code, the session cookies are captured.\n"}}},"jwt":{"$ref":"#/components/schemas/JWT"}}},"rateLimit":{"type":"object","description":"Rate limiting configuration. Defines how the system detects and handles\nAPI rate limit responses.\n\nWhen rate limiting is detected, the system pauses requests and waits for\nthe retry-after period before resuming. 
The retryHeader field on the parent\nHTTP object controls which response header contains the wait time.\n","properties":{"failStatusCode":{"type":"number","description":"HTTP status code that indicates the API is rate-limiting requests.\nDefaults to 429 (Too Many Requests) which is the HTTP standard.\nChange only if the API uses a non-standard status code for rate limits.\n","default":429},"failPath":{"type":"string","description":"JSON path or XPath to check in response bodies for rate limit indicators.\nUsed when APIs return 200 OK but embed rate limit errors in the response body.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at failPath that indicate rate limiting. Requires failPath to be set.\n"},"limit":{"type":"number","minimum":1,"description":"Maximum number of requests per rate-limit window. When set, the connection's\neffective concurrency level must be 1 to ensure proper rate limit enforcement.\n"}}},"headers":{"type":"array","description":"Default HTTP headers included in every request made through this connection.\nUse for API keys in custom headers, content negotiation, or any headers the API requires on all requests.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name (e.g., \"X-API-Key\", \"Accept\")."},"value":{"type":"string","description":"Header value. Supports handlebars expressions for dynamic values."}},"required":["name","value"]}},"encrypted":{"type":"object","description":"Encrypted custom fields for storing sensitive configuration values (API secrets,\nprivate keys, etc.) that don't fit standard auth fields. 
Values are encrypted at rest.\nField definitions are specified in encryptedFields.\n"},"encryptedFields":{"type":"array","description":"Metadata defining the encrypted custom fields available on this connection.\nEach entry describes a field in the encrypted object — its ID, label, and UI position.\n","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the encrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip or help text shown next to the field in the UI."}}}},"unencrypted":{"type":"object","description":"Unencrypted custom fields for non-sensitive configuration values.\nField definitions are specified in unencryptedFields.\n"},"unencryptedFields":{"type":"array","description":"Metadata defining the unencrypted custom fields available on this connection.\nEach entry describes a field in the unencrypted object.\n","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the unencrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip or help text shown next to the field in the UI."},"type":{"type":"string","description":"Field type hint for the UI (e.g., \"text\", \"select\")."}}}},"_iClientId":{"type":"string","format":"objectId","description":"ID of the iClient used for OAuth authentication."},"_httpConnectorId":{"type":"string","format":"objectId","description":"ID of the HTTP connector template this connection is based on."},"_httpConnectorApiId":{"type":"string","format":"objectId","description":"ID of the HTTP connector 
API definition."},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"ID of the HTTP connector version."}}},"JWT":{"type":"object","description":"JWT (JSON Web Token) authentication configuration.\n\nUsed when auth.type is \"jwtbearer\" on HTTP connections. The system generates\na signed JWT and uses it as a bearer token for API authentication.\n\n**Signing methods**\n- HMAC (HS256/HS384/HS512): Symmetric signing using a shared secret.\n  Requires the secret field.\n- RSA (RS256/RS384/RS512): Asymmetric signing using an RSA private key.\n  Requires the privateKey field.\n- ECDSA (ES256/ES384/ES512): Asymmetric signing using an elliptic curve private key.\n  Requires the privateKey field.\n\n**Required fields**\n- algorithm (signing algorithm)\n- secret (for HMAC algorithms) OR privateKey (for RSA/ECDSA algorithms)\n- At least one claim (issuer, subject, audience, or customClaims)\n","properties":{"algorithm":{"type":"string","enum":["HS256","HS384","HS512","RS256","RS384","RS512","ES256","ES384","ES512"],"description":"JWT signing algorithm. REQUIRED.\n\nHMAC algorithms (symmetric — use the secret field):\n- \"HS256\" — HMAC with SHA-256 (default, most common).\n- \"HS384\" — HMAC with SHA-384.\n- \"HS512\" — HMAC with SHA-512.\n\nRSA algorithms (asymmetric — use the privateKey field):\n- \"RS256\" — RSA with SHA-256 (most common for RSA).\n- \"RS384\" — RSA with SHA-384.\n- \"RS512\" — RSA with SHA-512.\n\nECDSA algorithms (asymmetric — use the privateKey field):\n- \"ES256\" — ECDSA with P-256 curve and SHA-256.\n- \"ES384\" — ECDSA with P-384 curve and SHA-384.\n- \"ES512\" — ECDSA with P-521 curve and SHA-512.\n","default":"HS256"},"secret":{"type":"string","description":"Shared secret key for HMAC signing algorithms (HS256/HS384/HS512).\nEncrypted at rest. 
REQUIRED when using an HMAC algorithm.\nNot used with RSA or ECDSA algorithms.\n","writeOnly":true},"privateKey":{"type":"string","description":"Private key in PEM format for RSA or ECDSA signing (encrypted at rest).\nREQUIRED when using an RSA (RS*) or ECDSA (ES*) algorithm.\nNot used with HMAC algorithms.\n","writeOnly":true},"publicKey":{"type":"string","description":"Public key for JWT verification. Optional — used when the API requires\nthe public key to be registered for signature verification.\n"},"issuer":{"type":"string","description":"JWT \"iss\" (issuer) claim. Identifies the principal that issued the JWT.\nTypically a client ID, service account email, or application identifier.\n"},"subject":{"type":"string","description":"JWT \"sub\" (subject) claim. Identifies the subject of the JWT.\nOften the user or service account the token represents.\n"},"audience":{"type":"string","description":"JWT \"aud\" (audience) claim. Identifies the intended recipient of the JWT.\nTypically the API endpoint URL or resource server identifier.\n"},"expiresIn":{"type":"number","description":"JWT expiration time in seconds from the current time.\nThe \"exp\" claim is set to (now + expiresIn). 
After this time, the JWT is invalid\nand the system generates a new one.\n","default":3600},"notBefore":{"type":"number","description":"JWT \"nbf\" (not before) claim in seconds from the current time.\nThe JWT is not valid before this time.\n"},"issuedAt":{"type":"boolean","description":"When true, includes the \"iat\" (issued at) claim with the current timestamp.\n","default":true},"jwtId":{"type":"boolean","description":"When true, includes a unique \"jti\" (JWT ID) claim for each generated token.\nUseful for preventing token replay attacks.\n","default":false},"customClaims":{"type":"object","description":"Additional custom claims to include in the JWT payload.\nUse for API-specific claims not covered by the standard fields\n(e.g., roles, permissions, custom scopes).\n"},"token":{"type":"string","description":"Pre-generated JWT token value (encrypted at rest).\nWhen set, this static token is used instead of generating one dynamically.\nUse only when the API provides a long-lived JWT that does not need to be regenerated.\n","writeOnly":true}}},"RDBMS":{"type":"object","description":"Configuration for relational database connections. 
Used when the connection type is \"rdbms\".\n\nCeligo has native support for these database systems:\n- MySQL, MariaDB\n- PostgreSQL\n- Microsoft SQL Server (mssql), Azure Synapse (azuresynapse)\n- Oracle\n- Snowflake\n- Google BigQuery\n- Amazon Redshift\n\n**Required fields**\n- type (always required — selects the database system)\n- host, database, user, password (for most database types)\n- Snowflake: additionally requires snowflake.warehouse and snowflake.authType (\"basic\" or \"keyPair\")\n- BigQuery: requires bigquery.projectId, bigquery.dataset, bigquery.clientEmail, bigquery.privateKey\n- Redshift: requires host, database, user, password; optionally redshift.aws credentials for IAM auth\n- Oracle: uses serviceName instead of database\n\n**Database-specific notes**\n- For Oracle, set serviceName (not database) and optionally serverType and instanceName.\n- For Snowflake, snowflake.authType is REQUIRED: use \"basic\" for password auth, \"keyPair\" for RSA key-pair auth.\n- For MS SQL Server with Azure AD service principal auth, set mssql.authType to \"azure-service-principal\".\n- For BigQuery, use service account credentials (clientEmail + privateKey).\n","required":["type"],"properties":{"type":{"type":"string","enum":["mysql","postgresql","mssql","snowflake","oracle","bigquery","redshift","mariadb","azuresynapse"],"description":"The specific relational database system to connect to. REQUIRED.\n\nThis determines the SQL dialect, connection driver, default port, and\nwhich additional sub-fields are required.\n\n- \"mysql\" — MySQL (default port 3306)\n- \"mariadb\" — MariaDB (default port 3306)\n- \"postgresql\" — PostgreSQL (default port 5432)\n- \"mssql\" — Microsoft SQL Server (default port 1433)\n- \"azuresynapse\" — Azure Synapse Analytics (default port 1433)\n- \"oracle\" — Oracle Database (default port 1521). Uses serviceName instead of database.\n- \"snowflake\" — Snowflake Data Cloud. 
Requires snowflake.warehouse.\n- \"bigquery\" — Google BigQuery. Requires service account credentials in the bigquery sub-object.\n- \"redshift\" — Amazon Redshift. Optionally uses AWS IAM credentials.\n"},"host":{"type":"string","description":"Database server hostname or IP address.\nREQUIRED for all types except BigQuery (which uses Google's API endpoints).\n\nFor Snowflake, use the account URL format: \"account_identifier.snowflakecomputing.com\".\nFor Redshift, use the cluster endpoint: \"cluster-name.region.redshift.amazonaws.com\".\n"},"port":{"type":"number","description":"Database server port number. If omitted, uses the default port for the database type:\n- MySQL/MariaDB: 3306\n- PostgreSQL: 5432\n- MS SQL/Azure Synapse: 1433\n- Oracle: 1521\n- Snowflake: 443\n- Redshift: 5439\n","minimum":1,"maximum":65535},"database":{"type":"string","description":"Database name to connect to. REQUIRED for most types.\n\nFor Oracle, use the serviceName field instead of database.\nFor BigQuery, use bigquery.dataset to specify the target dataset.\n"},"instanceName":{"type":"string","description":"Named instance identifier. Used by MS SQL Server and Oracle when the server\nhosts multiple database instances. Not required for default instances.\n"},"user":{"type":"string","description":"Database username for authentication.\nREQUIRED for all types except BigQuery (which uses service account auth).\n"},"password":{"type":"string","description":"Database password (encrypted at rest). REQUIRED alongside user for password-based auth.","writeOnly":true},"version":{"type":"string","description":"Database server version string. Used by some drivers for compatibility adjustments."},"serviceName":{"type":"string","description":"Oracle service name. 
Used instead of the database field for Oracle connections.\nThis is the TNS service name or pluggable database (PDB) service name.\n"},"serverType":{"type":"string","enum":["dedicated","shared","pooled"],"description":"Oracle server connection type. Controls the server process model.\n\n- \"dedicated\" — Each connection gets its own server process. Best for heavy workloads.\n- \"shared\" — Connections share a pool of server processes. More resource-efficient.\n- \"pooled\" — Database Resident Connection Pooling (DRCP). Best for many short-lived connections.\n"},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent database connections.\nSet based on the database server's connection limit and available resources.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts the number of\nconcurrent connections between 1 and this value based on performance feedback.\n","minimum":1,"maximum":100},"disableStrictSSL":{"type":"boolean","description":"When true, disables strict SSL/TLS certificate validation for the database connection.\nOnly use for development/testing with self-signed certificates.\n","default":false},"snowflake":{"type":"object","description":"Snowflake-specific configuration. REQUIRED when type is \"snowflake\".\n","required":["warehouse","authType"],"properties":{"warehouse":{"type":"string","description":"Snowflake virtual warehouse name. REQUIRED for Snowflake connections.\nThe warehouse provides compute resources for queries. Must be a warehouse\nthe user role has access to.\n"},"schema":{"type":"string","description":"Default Snowflake schema. If omitted, queries must fully qualify table names\n(e.g., DATABASE.SCHEMA.TABLE).\n"},"role":{"type":"string","description":"Snowflake security role to use for the session. Determines which databases,\nschemas, and warehouses are accessible. 
Defaults to the user's default role.\n"},"authType":{"type":"string","enum":["basic","keyPair"],"description":"Authentication type for Snowflake. REQUIRED.\n\n- \"basic\" — Username/password authentication. Use for standard username/password connections.\n- \"keyPair\" — Key-pair authentication using RSA private key.\n  When using keyPair, provide the RSA private key in the connection's key field.\n\nWhen creating placeholder/dummy connections, use \"basic\".\n"}}},"mssql":{"type":"object","description":"Microsoft SQL Server / Azure Synapse-specific configuration.\n","properties":{"authType":{"type":"string","enum":["basic","azure-service-principal"],"description":"Authentication type for MS SQL Server.\n\n- \"basic\" — Standard SQL Server authentication with username/password (default).\n- \"azure-service-principal\" — Azure Active Directory service principal authentication.\n  Requires an iClient configuration with the service principal credentials.\n","default":"basic"}}},"bigquery":{"type":"object","description":"Google BigQuery-specific configuration. REQUIRED when type is \"bigquery\".\nUses Google Cloud service account credentials for authentication.\n","properties":{"projectId":{"type":"string","description":"Google Cloud project ID that contains the BigQuery datasets. REQUIRED.\nFound in the Google Cloud Console project settings.\n"},"dataset":{"type":"string","description":"Default BigQuery dataset name. REQUIRED.\nQueries will target tables within this dataset unless fully-qualified names are used.\n"},"clientEmail":{"type":"string","description":"Google Cloud service account email address. REQUIRED.\nFormat: \"service-account-name@project-id.iam.gserviceaccount.com\".\nThe service account must have BigQuery Data Editor and BigQuery Job User roles.\n"},"privateKey":{"type":"string","description":"Google Cloud service account private key in PEM format (encrypted at rest). 
REQUIRED.\nDownloaded as part of the service account JSON key file.\n","writeOnly":true}}},"redshift":{"type":"object","description":"Amazon Redshift-specific configuration. AWS credentials are optional —\nonly needed for IAM-based authentication instead of standard username/password.\n","properties":{"aws":{"type":"object","description":"AWS credentials for IAM-based Redshift authentication.","properties":{"accessKeyId":{"type":"string","description":"AWS access key ID for IAM authentication."},"secretAccessKey":{"type":"string","description":"AWS secret access key (encrypted at rest).","writeOnly":true}}},"clusterIdentifier":{"type":"string","description":"Redshift cluster identifier. Required for IAM-based authentication."},"region":{"type":"string","description":"AWS region where the Redshift cluster is deployed. Required for IAM-based authentication.\n","enum":["us-east-1","us-east-2","us-west-1","us-west-2","eu-west-1","eu-west-2","eu-west-3","eu-central-1","ap-southeast-1","ap-southeast-2","ap-northeast-1","ap-northeast-2","ap-south-1","sa-east-1","ca-central-1"]}}},"ssl":{"$ref":"#/components/schemas/SSL"},"options":{"type":"array","description":"Additional database driver connection options as name/value pairs.\nUse for driver-specific settings not covered by the standard fields\n(e.g., connection timeout, charset, application name).\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Option name (driver-specific)."},"value":{"type":"string","description":"Option value."}},"required":["name","value"]}}}},"MongoDB":{"type":"object","description":"Configuration for MongoDB connections. 
Used when the connection type is \"mongodb\".\n\nSupports standalone MongoDB instances, replica sets, and MongoDB Atlas clusters.\n\n**Required fields**\n- host (array of one or more server addresses)\n- database (target database name)\n\n**Authentication**\nUsername and password are optional — only required when the MongoDB server has\nauthentication enabled (which is recommended for production).\nUse authSource to specify the authentication database if it differs from the target database.\n\n**Replica sets**\nFor replica sets, provide all member hostnames in the host array and set\nthe replicaSet field to the replica set name.\n","required":["host","database"],"properties":{"host":{"type":"array","items":{"type":"string"},"description":"MongoDB server addresses. REQUIRED. An array of one or more host:port strings.\n\n- Standalone: [\"mongodb.example.com:27017\"]\n- Replica set: [\"rs1.example.com:27017\", \"rs2.example.com:27017\", \"rs3.example.com:27017\"]\n- MongoDB Atlas: [\"cluster0-shard-00-00.abc.mongodb.net:27017\", ...]\n\nInclude the port number with each host. Default MongoDB port is 27017.\n"},"database":{"type":"string","description":"Target MongoDB database name. REQUIRED.\nAll operations (reads/writes) target collections within this database.\n"},"username":{"type":"string","description":"MongoDB username for authentication.\nRequired when the MongoDB server has authentication enabled.\n"},"password":{"type":"string","description":"MongoDB password (encrypted at rest). Required alongside username.","writeOnly":true},"replicaSet":{"type":"string","description":"MongoDB replica set name. 
Required when connecting to a replica set.\nThe driver uses this to discover all replica set members and handle failover.\nFor MongoDB Atlas, this is typically \"atlas-xxxxxx-shard-0\".\n"},"ssl":{"type":"boolean","description":"When true, connects to MongoDB over TLS/SSL.\nRequired for MongoDB Atlas and recommended for all production deployments.\n","default":false},"authSource":{"type":"string","description":"MongoDB authentication database — the database where the user credentials are stored.\nDefaults to the value of the database field. Set to \"admin\" if the user was created\nin the admin database (common for shared MongoDB deployments and Atlas).\n"},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent MongoDB operations.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts concurrency\nbetween 1 and this value based on performance feedback.\n","minimum":1,"maximum":100}}},"AS2":{"type":"object","description":"AS2 (Applicability Statement 2) connection configuration for EDI","required":["as2Id","partnerId"],"properties":{"as2Id":{"type":"string","description":"AS2 identifier for this station. Trading partners use this as the \"To\"\nidentifier when sending documents, and integrator.io uses it as the \"From\"\nidentifier when sending documents to partners.\n\nIMPORTANT: This value must be unique across ALL integrator.io users to\nensure inbound documents are routed correctly. It CANNOT be updated after\ncreation. Use a different identifier for each environment (e.g., production vs. 
non-production).\n\nIf not provided, a unique value will be auto-generated.\n"},"partnerId":{"type":"string","description":"AS2 partner identifier"},"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a trading partner connector"},"contentBasedFlowRouter":{"type":"object","description":"Content-based routing configuration","properties":{"function":{"type":"string","description":"Name of the routing function"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to script containing the routing function"}}},"partnerStationInfo":{"type":"object","description":"Partner (remote) station configuration — used on the IMPORT side.\nControls how messages are sent TO the trading partner.\n","required":["as2URI","signing","encryptionType"],"properties":{"as2URI":{"type":"string","format":"uri","description":"Partner's AS2 endpoint URI"},"mdn":{"type":"object","description":"MDN (Message Disposition Notification) settings","required":["mdnSigning"],"properties":{"mdnURL":{"type":"string","description":"URL for asynchronous MDN"},"signatureProtocol":{"type":"string","enum":["pkcs7-signature"],"description":"MDN signature protocol"},"mdnSigning":{"type":"string","enum":["NONE","SHA1","MD5","SHA256"],"description":"MDN signing algorithm"},"verifyMDNSignature":{"type":"boolean","description":"Whether to verify the MDN signature from the partner"}}},"auth":{"type":"object","description":"Authentication for the partner AS2 endpoint","properties":{"type":{"type":"string","enum":["basic","token"],"description":"Authentication type"},"failStatusCode":{"type":"number","description":"HTTP status code indicating auth failure"},"failPath":{"type":"string","description":"JSON path to check for auth failure"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at failPath that indicate auth failure"},"basic":{"type":"object","description":"Basic auth 
credentials","properties":{"username":{"type":"string","description":"Username for basic auth"},"password":{"type":"string","writeOnly":true,"description":"Password for basic auth (encrypted)"}}},"token":{"type":"object","description":"Token-based auth configuration","properties":{"refreshHeaders":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Headers for token refresh requests"},"refreshToken":{"type":"string","writeOnly":true,"description":"Refresh token (encrypted)"}}}}},"rateLimit":{"type":"object","description":"Rate limiting configuration","properties":{"failStatusCode":{"type":"number"},"failPath":{"type":"string"},"failValues":{"type":"array","items":{"type":"string"}},"limit":{"type":"number","minimum":1,"description":"Maximum requests per time window"}}},"SMIMEVersion":{"type":"string","enum":["v2","v3"],"description":"S/MIME version (not typically exposed on UI)"},"signing":{"type":"string","enum":["NONE","SHA1","MD5","SHA256"],"description":"Message signing algorithm for outbound messages to partner"},"encryptionType":{"type":"string","enum":["NONE","DES","RC2","3DES","AES128","AES256"],"description":"Message encryption type for outbound messages to partner"},"encoding":{"type":"string","enum":["base64","binary"],"description":"Content transfer encoding (default base64)"},"signatureEncoding":{"type":"string","enum":["base64","binary"],"description":"Signature encoding format"}}},"userStationInfo":{"type":"object","description":"User (local) station configuration — used on the EXPORT side.\nControls how inbound messages from the partner are processed.\n","required":["signing","encryptionType"],"properties":{"mdn":{"type":"object","description":"MDN settings for inbound messages","required":["mdnSigning"],"properties":{"mdnURL":{"type":"string","description":"URL for asynchronous MDN"},"signatureProtocol":{"type":"string","enum":["pkcs7-signature"],"description":"MDN signature 
protocol"},"mdnSigning":{"type":"string","enum":["NONE","SHA1","MD5","SHA256"],"description":"MDN signing algorithm"},"mdnEncoding":{"type":"string","enum":["base64","binary"],"description":"MDN encoding format"}}},"signing":{"type":"string","enum":["NONE","SHA1","MD5","SHA256"],"description":"Message signing algorithm for inbound message verification"},"encryptionType":{"type":"string","enum":["NONE","DES","RC2","3DES","AES128","AES256"],"description":"Message encryption type for inbound message decryption"},"encoding":{"type":"string","enum":["base64","binary"],"description":"Content transfer encoding (default binary)"},"compressed":{"type":"boolean","description":"Whether to compress messages","default":false}}},"encrypted":{"type":"object","description":"Encrypted certificate and key data (stored encrypted at rest).\nRequired when signing or encryption is enabled (not NONE).\nAuto-generated self-signed certificates will be injected if not provided.\n","properties":{"userPrivateKey":{"type":"string","writeOnly":true,"description":"PEM-encoded X.509 private key for this station.\nRequired when partnerStationInfo.signing != NONE or userStationInfo.encryptionType != NONE.\n"}}},"unencrypted":{"type":"object","description":"Unencrypted certificate data for identity and partner verification.\nRequired when signing or encryption is enabled (not NONE).\nAuto-generated self-signed certificates will be injected if not provided.\n","properties":{"userPublicKey":{"type":"string","description":"PEM-encoded X.509 public certificate for this station.\nRequired when partnerStationInfo.signing != NONE or userStationInfo.encryptionType != NONE.\n"},"partnerCertificate":{"type":"string","description":"PEM-encoded X.509 certificate for the trading partner.\nRequired when partnerStationInfo.encryptionType != NONE or userStationInfo.signing != NONE,\nor when partnerStationInfo.mdn.verifyMDNSignature is true.\n"}}},"concurrencyLevel":{"type":"number","description":"Maximum number of 
concurrent operations","minimum":1,"maximum":10,"default":1},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling","minimum":1,"maximum":10},"preventCanonicalization":{"type":"boolean","description":"Prevent canonicalization of message content","default":false}}},"Filesystem":{"type":"object","description":"Configuration for filesystem connections. Used when the connection type is \"filesystem\".\n\nFilesystem connections provide access to the local filesystem on a Celigo on-premise Agent.\nThis enables file-based integrations with directories on the agent's host machine or\nmounted network drives.\n\n**Agent requirement**\nA Celigo Agent is REQUIRED for filesystem connections (set _agentId on the parent\nconnection object). The agent provides the filesystem access — cloud-only\ndeployments cannot use this connection type.\n\n**Ping**\nSet ping.directoryPath to a directory the agent can access to verify the connection.\n","properties":{"ping":{"type":"object","description":"Health check configuration for validating filesystem access.","properties":{"directoryPath":{"type":"string","description":"Absolute directory path on the agent's host used for connection health checks.\nThe system verifies the directory exists and is accessible.\nExample: \"/data/integrations/incoming\" or \"C:\\\\Data\\\\Integrations\".\n"}}},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent filesystem operations.\n","minimum":1,"maximum":100},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling.\n","minimum":1,"maximum":100}}},"DynamoDB":{"type":"object","description":"Configuration for Amazon DynamoDB connections. 
Used when the connection type is \"dynamodb\".\n\nProvides access to DynamoDB tables for NoSQL data integrations.\nUses AWS IAM credentials for authentication.\n\n**Required fields**\n- aws.accessKeyId\n- aws.secretAccessKey\n\nThe IAM credentials must have appropriate DynamoDB permissions\n(dynamodb:GetItem, dynamodb:PutItem, dynamodb:Query, dynamodb:Scan, etc.).\n","required":["aws"],"properties":{"aws":{"type":"object","description":"AWS IAM credentials for DynamoDB authentication. REQUIRED.","properties":{"accessKeyId":{"type":"string","description":"AWS access key ID for IAM authentication.\nFrom an IAM user or role with DynamoDB access permissions.\n"},"secretAccessKey":{"type":"string","description":"AWS secret access key (encrypted at rest).","writeOnly":true}}},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent DynamoDB operations.\nDynamoDB throughput is governed by table-level read/write capacity units.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. 
The system adjusts concurrency\nbetween 1 and this value based on throughput feedback.\n","minimum":1,"maximum":100}}},"JDBC":{"type":"object","description":"Configuration for JDBC (Java Database Connectivity) connections.\nUsed when the connection type is \"jdbc\".\n\nJDBC connections provide access to databases through Java-based JDBC drivers,\nincluding databases not natively supported by Celigo's RDBMS connector.\n\n**When to use jdbc vs rdbms**\n- Use RDBMS for: MySQL, PostgreSQL, MS SQL, Snowflake, Oracle, BigQuery, Redshift, MariaDB.\n- Use JDBC for: NetSuite (SuiteAnalytics), Databricks, DB2, Workday, and other databases\n  accessible via a JDBC driver deployed on a Celigo Agent.\n\n**Required fields**\n- type (always required — selects the JDBC driver)\n- host, database, user, password (for most types)\n- For Oracle: use serviceName instead of database\n- For wallet auth: set authType to \"wallet\" and provide the wallet file\n\n**Agent requirement**\nMost JDBC connections require a Celigo on-premise Agent (set _agentId on the connection)\nbecause the JDBC driver runs on the agent, not in the cloud.\n","required":["type"],"properties":{"type":{"type":"string","enum":["agent","netsuitejdbc","databricks","oracle:thin","sqlserver","activedirectory","db2","workday"],"description":"JDBC driver/connection type. REQUIRED.\n\n- \"agent\" — Generic JDBC driver deployed on a Celigo Agent. Requires driverPath.\n- \"netsuitejdbc\" — NetSuite SuiteAnalytics JDBC driver for reporting and analytics queries.\n- \"databricks\" — Databricks SQL/Spark via JDBC.\n- \"oracle:thin\" — Oracle Database via the Oracle Thin JDBC driver.\n- \"sqlserver\" — Microsoft SQL Server via the jTDS or Microsoft JDBC driver.\n- \"activedirectory\" — SQL Server with Active Directory authentication.\n- \"db2\" — IBM DB2 database.\n- \"workday\" — Workday via JDBC (for report/data access).\n"},"version":{"type":"string","description":"JDBC driver version. 
Used for driver compatibility when multiple versions are available."},"host":{"type":"string","description":"Database server hostname or IP address.\nFor NetSuite JDBC, use the SuiteAnalytics Connect hostname\n(e.g., \"account-id.connect.api.netsuite.com\").\n"},"port":{"type":"number","description":"Database server port number. Default varies by driver type."},"database":{"type":"string","description":"Database or catalog name. For Oracle, use the serviceName field instead.\n"},"user":{"type":"string","description":"Database username for authentication."},"password":{"type":"string","description":"Database password (encrypted at rest).","writeOnly":true},"serviceName":{"type":"string","description":"Oracle service name. Used instead of the database field for Oracle JDBC connections.\nThis is the TNS service name or pluggable database (PDB) service name.\n"},"authType":{"type":"string","enum":["customjdbc","wallet"],"description":"Authentication method for the JDBC connection.\n\n- \"customjdbc\" — Standard username/password authentication (default).\n- \"wallet\" — Oracle Wallet authentication. Provide the wallet file in the wallet field.\n  The wallet contains encrypted credentials so username/password are not needed separately.\n"},"wallet":{"type":"string","description":"Oracle Wallet file contents (encrypted at rest).\nREQUIRED when authType is \"wallet\". 
Contains the auto-login wallet (cwallet.sso)\nwith encrypted credentials for passwordless Oracle authentication.\n","writeOnly":true},"driverPath":{"type":"string","description":"File path to the JDBC driver JAR on the Celigo Agent.\nREQUIRED when type is \"agent\" (generic JDBC).\nThe driver must be deployed on the agent before creating the connection.\n"},"properties":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Additional JDBC connection properties as name/value pairs.\nThese are passed directly to the JDBC driver as connection properties.\nUse for driver-specific settings like SSL mode, connection timeout,\napplication name, etc.\n"},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent database connections.\nJDBC connections often run through a single Agent, so keep this value\nconservative to avoid overwhelming the Agent or database.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. The system adjusts the number of\nconcurrent connections between 1 and this value based on performance feedback.\n","minimum":1,"maximum":100}}},"VAN":{"type":"object","description":"Configuration for VAN (Value Added Network) connections.\nUsed when the connection type is \"van\".\n\nVAN connections provide EDI document exchange through a Value Added Network provider.\nDocuments are sent to and received from a VAN mailbox identified by the mailboxId.\n\nThe as2Id field links this VAN connection to an AS2 identity for document routing,\nand contentBasedFlowRouter enables routing inbound documents to different flows\nbased on document content.\n","properties":{"mailboxId":{"type":"number","description":"VAN mailbox identifier. 
Identifies the specific mailbox on the VAN provider\nthat this connection sends to and receives from.\n"},"as2Id":{"type":"string","description":"AS2 identifier for this VAN station. Used as the \"From\" identifier when sending\ndocuments and the \"To\" identifier when receiving. Must be unique across all\nintegrator.io users. Cannot be changed after creation.\n"},"contentBasedFlowRouter":{"type":"object","description":"Content-based routing configuration for inbound VAN documents.\nRoutes incoming documents to different flows based on document content\nusing a custom JavaScript function.\n","properties":{"function":{"type":"string","description":"Name of the JavaScript routing function in the referenced script."},"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource containing the routing function."}}}}},"MCP":{"type":"object","description":"Configuration for MCP (Model Context Protocol) connections.\nUsed when the connection type is \"mcp\".\n\nMCP connections enable Celigo to call tools exposed by an external MCP server\nover HTTP. The MCP server provides a set of callable tools that can be invoked\nduring flow execution.\n\n**Required fields**\n- serverURL (the MCP server's HTTP endpoint)\n- http.auth.type (must be one of: \"token\", \"oauth\", or \"custom\")\n\n**Authentication**\nMCP connections authenticate via the http sub-object, which reuses the same\nauth configuration as HTTP connections but with a restricted set of auth types.\nOnly \"token\", \"oauth\", and \"custom\" are valid for MCP connections.\n","properties":{"protocol":{"type":"string","enum":["http"],"description":"Transport protocol for communicating with the MCP server.\nCurrently only \"http\" is supported.\n"},"serverURL":{"type":"string","description":"MCP server endpoint URL. 
REQUIRED.\nMust be a valid absolute URL (e.g., \"https://mcp-server.example.com/mcp\").\nThe system validates this as a proper URL.\n","format":"uri"},"timeout":{"type":"number","description":"Request timeout in milliseconds for MCP tool invocations.\nIf the MCP server does not respond within this time, the request fails.\n"},"allowedTools":{"type":"array","description":"Optional allowlist of MCP tool names that this connection may invoke.\nWhen set, only tools in this list can be called. When omitted or empty,\nall tools exposed by the MCP server are available.\n","items":{"type":"string"}},"http":{"type":"object","description":"HTTP transport configuration for the MCP connection, including authentication and headers.\n\nThe auth.type MUST be one of: \"token\", \"oauth\", or \"custom\".\nOther auth types (basic, cookie, jwt, etc.) are not supported for MCP connections.\n","properties":{"_iClientId":{"type":"string","format":"objectId","description":"Reference to an OAuth iClient for OAuth-based MCP authentication.\nRequired when http.auth.type is \"oauth\".\n"},"auth":{"$ref":"#/components/schemas/auth"},"headers":{"type":"array","description":"Default HTTP headers included in every request to the MCP server.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}},"required":["name","value"]}},"unencrypted":{"type":"object","description":"Unencrypted custom fields for non-sensitive MCP configuration."},"encrypted":{"type":"object","description":"Encrypted custom fields for sensitive MCP configuration (encrypted at rest)."}}}}},"auth":{"type":"object","description":"Authentication configuration for the API connection.\n\nThe auth.type field selects the authentication strategy. 
Each type requires\nspecific sub-fields — see the type field description for details.\n","properties":{"type":{"type":"string","enum":["custom","basic","token","oauth","wsse","cookie","jwt","digest","specific","oauth1","jwtbearer"],"description":"Authentication method for this connection. Determines which auth sub-fields are required.\n\n- \"basic\" — HTTP Basic Auth. Requires: auth.basic.username and auth.basic.password.\n- \"token\" — API key or bearer token. Requires: auth.token with token value and location.\n  Supports automatic token refresh via refreshMethod/refreshRelativeURI.\n- \"oauth\" — OAuth 2.0. Requires: auth.oauth with grant type, URIs, and credentials.\n  Supports authorization code, client credentials, and password grant flows.\n- \"jwtbearer\" — JWT Bearer Token. Requires: auth.jwt with signatureMethod and payload.\n  HMAC methods need a secret; RSA/EC methods need a privateKey.\n- \"cookie\" — Cookie-based session auth. Requires: auth.cookie.uri (login endpoint).\n- \"digest\" — HTTP Digest Auth. Requires: auth.basic.username and auth.basic.password.\n- \"oauth1\" — OAuth 1.0a. Requires: auth.oauth.oauth1 with consumer key/secret and tokens.\n- \"custom\" — No built-in auth handling. Put credentials in headers or encrypted fields.\n- \"wsse\" — WS-Security. Requires: auth.basic.username and auth.basic.password.\n- \"specific\" — Platform-specific auth (e.g., PTX).\n- \"jwt\" — Legacy JWT auth. 
Prefer \"jwtbearer\" for new connections.\n"},"failStatusCode":{"type":"number","description":"HTTP status code that indicates an authentication failure (e.g., 401, 403).\nWhen this status code is received, the system triggers re-authentication\nbefore retrying the request.\n"},"failPath":{"type":"string","description":"JSON path or XPath expression to check in response bodies for authentication failure indicators.\nUsed when APIs return 200 OK but embed auth errors in the response body.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at failPath that indicate an authentication failure.\nRequires failPath to be set.\n"},"skipFollowAuthorizationHeader":{"type":"boolean","description":"When true, the Authorization header is NOT forwarded on HTTP redirects.\nEnable this for APIs that redirect to a different domain after authentication.\n"},"basic":{"type":"object","description":"Basic authentication credentials. REQUIRED when auth.type is \"basic\", \"wsse\", or \"digest\".\n","properties":{"username":{"type":"string","description":"Username for Basic/Digest/WSSE authentication."},"password":{"type":"string","description":"Password for Basic/Digest/WSSE authentication (encrypted at rest).","writeOnly":true}}},"token":{"type":"object","description":"Token-based authentication configuration. REQUIRED when auth.type is \"token\".\n\nSupports static API keys/bearer tokens and automatic token refresh flows.\nThe token can be sent in a header (Authorization), query parameter, or request body.\n","properties":{"token":{"type":"string","description":"The API key or bearer token value (encrypted at rest).\nREQUIRED unless automatic token refresh is configured.\n","writeOnly":true},"location":{"type":"string","enum":["url","header","body"],"description":"Where to include the token in outbound requests.\n- \"header\" — Sent in an HTTP header (most common). Use headerName and scheme to control format.\n- \"url\" — Sent as a URL query parameter. 
Use paramName to set the parameter name.\n- \"body\" — Included in the request body.\n"},"headerName":{"type":"string","description":"HTTP header name for the token when location is \"header\".\nDefaults to \"Authorization\" if omitted.\n"},"scheme":{"type":"string","description":"Token scheme/prefix when sent in a header. Prepended before the token value.\nCommon values: \"Bearer\", \"Token\", \"Basic\".\nExample: scheme \"Bearer\" produces header \"Authorization: Bearer <token>\".\n"},"paramName":{"type":"string","description":"Query parameter name for the token when location is \"url\".\nExample: paramName \"api_key\" produces URL \"?api_key=<token>\".\n"},"refreshMethod":{"type":"string","enum":["GET","POST","PUT"],"description":"HTTP method for automatic token refresh requests.\nREQUIRED when no static token is provided (refresh-based auth flow).\n","default":"POST"},"refreshRelativeURI":{"type":"string","description":"Relative URI (appended to baseURI) for the token refresh endpoint.\nThe system calls this endpoint to obtain a new token when the current one expires.\n"},"refreshBody":{"type":"string","description":"Request body to send with the token refresh request."},"refreshMediaType":{"type":"string","enum":["json","urlencoded","xml","plaintext"],"description":"Content type for the token refresh request body.\n","default":"urlencoded"},"refreshResponseMediaType":{"type":"string","enum":["json","xml"],"description":"Expected content type of the token refresh response."},"refreshTokenPath":{"type":"string","description":"JSON path to extract the new token from the refresh response.\nExample: \"access_token\" or \"data.token\".\n"},"refreshToken":{"type":"string","description":"Refresh token used to obtain a new access token (encrypted at rest).","writeOnly":true},"refreshTokenLocation":{"type":"string","enum":["header","body"],"description":"Where to include the refresh token in refresh 
requests."},"refreshHeaders":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Additional headers to include in token refresh requests."},"tokenPaths":{"type":"array","items":{"type":"string"},"description":"JSON paths to extract multiple token values from the refresh response.\nUse when the refresh response contains tokens at different paths that need\nto be stored for subsequent requests.\n"}}},"oauth":{"$ref":"#/components/schemas/OAuth"},"cookie":{"type":"object","description":"Cookie-based session authentication. REQUIRED when auth.type is \"cookie\".\n\nThe system authenticates by sending a request to the login URI, captures the\nsession cookies from the response, and includes them in all subsequent API requests.\n","properties":{"uri":{"type":"string","description":"Login endpoint URI for cookie authentication. REQUIRED.\nThe system sends a request to this URI to obtain session cookies.\n"},"body":{"type":"string","description":"Request body for the login request (e.g., JSON with username/password)."},"method":{"type":"string","description":"HTTP method for the login request (typically POST)."},"successStatusCode":{"type":"number","description":"HTTP status code that confirms successful authentication.\nIf the login response returns this status code, the session cookies are captured.\n"}}},"jwt":{"$ref":"#/components/schemas/JWT"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper connections. 
Used when the connection type is \"wrapper\".\n\nWrapper connections are custom-built connectors deployed on a Celigo Stack.\nThey allow developers to implement custom authentication, data transformation,\nand API interaction logic using server-side JavaScript on a dedicated stack.\n\n**When to use**\nUse wrapper connections when:\n- The target API has non-standard authentication not supported by HTTP connections.\n- Custom server-side logic is needed for connection management.\n- You need a fully custom connector deployed on a Celigo Stack.\n\n**Required fields**\n- pingFunction (name of the function that tests connection health)\n- _stackId (set on the parent connection object — references the Celigo Stack)\n\n**Custom fields**\nUse encrypted/unencrypted fields to store connection-specific configuration.\nThe wrapper code on the Stack accesses these values at runtime.\n","required":["pingFunction"],"properties":{"pingFunction":{"type":"string","description":"Name of the JavaScript function on the Stack that tests connection health.\nThis function is called when the user clicks \"Test Connection\" in the UI.\nIt should verify that the connection credentials are valid and the target\nsystem is reachable.\n"},"unencrypted":{"type":"object","description":"Unencrypted custom fields for non-sensitive configuration values.\nThese values are accessible to the wrapper code on the Stack at runtime.\nField definitions are specified in unencryptedFields.\n"},"unencryptedFields":{"type":"array","description":"Metadata defining the unencrypted custom fields on this connection.","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the unencrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip text shown 
next to the field in the UI."},"type":{"type":"string","description":"Field type hint for the UI (e.g., \"text\", \"select\")."}}}},"encrypted":{"type":"object","description":"Encrypted custom fields for sensitive configuration values (API keys, passwords, etc.).\nValues are encrypted at rest and decrypted only on the Stack at runtime.\nField definitions are specified in encryptedFields.\n"},"encryptedFields":{"type":"array","description":"Metadata defining the encrypted custom fields on this connection.","items":{"type":"object","properties":{"id":{"type":"string","description":"Field identifier — matches the key in the encrypted object."},"label":{"type":"string","description":"Human-readable label shown in the UI."},"required":{"type":"boolean","default":false},"position":{"type":"number","description":"Display order in the UI form."},"helpText":{"type":"string","description":"Tooltip text shown next to the field in the UI."},"type":{"type":"string","description":"Field type hint for the UI (e.g., \"text\", \"select\")."}}}},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack this wrapper connection uses."},"concurrencyLevel":{"type":"number","description":"Maximum number of concurrent operations through this wrapper connection.\n","minimum":1,"maximum":100,"default":5},"targetConcurrencyLevel":{"type":"number","description":"Target concurrency level for auto-scaling. 
The system adjusts concurrency\nbetween 1 and this value based on performance feedback.\n","minimum":1,"maximum":100}}},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. 
It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. 
It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. 
The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. 
`500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. 
The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/tools/{_id}/connections":{"get":{"summary":"List connections a tool depends on","operationId":"listToolConnections","tags":["Tools"],"description":"Returns the full Connection resources the tool references — both directly\n(via the tool's own `_connectionId` fields on its steps) and transitively\nthrough descendant resources (inner tools, lookups, imports, exports).\n\nReturns `200` with `[]` when the tool has no connection dependencies.\n\nAI guidance:\n- Use this to discover *what systems* a tool talks to before cloning,\n  moving, or evaluating blast radius of a connection change.\n- For the full dependency tree (imports, exports, nested tools), use\n  `GET /v1/tools/{_id}/descendants` instead — that endpoint returns the\n  actual resource docs, grouped by type.","parameters":[{"name":"_id","in":"path","required":true,"description":"Tool id.","schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Array of full Connection objects referenced by the tool and its\ndescendants. Empty array when no connections are referenced.","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Response-2"}}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
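
The `listToolConnections` response is a plain array of full Connection documents, so a blast-radius check reduces to simple grouping. A minimal sketch in Python, assuming the response has already been fetched and JSON-decoded; the sample payload and its `type`/`name` values are illustrative, not real API output:

```python
# Group a GET /v1/tools/{_id}/connections response by connection type.
# The response shape (array of full Connection objects) follows the spec;
# the sample payload below is hypothetical.

def summarize_connection_footprint(connections):
    """Map each connection `type` to the list of connection ids using it."""
    footprint = {}
    for conn in connections:
        footprint.setdefault(conn.get("type", "unknown"), []).append(conn["_id"])
    return footprint

sample_response = [
    {"_id": "c1", "type": "http", "name": "Storefront API"},
    {"_id": "c2", "type": "netsuite", "name": "NetSuite production"},
    {"_id": "c3", "type": "http", "name": "Internal service"},
]

print(summarize_connection_footprint(sample_response))
```

Per the spec, a tool with no connection dependencies returns `200` with `[]`, which this sketch maps to an empty summary.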

## List resources a tool depends on, grouped by type

> Returns the full dependency tree of a tool as three arrays: the\
> \`imports\`, \`exports\`, and nested \`tools\` it references directly or\
> transitively. Each entry is the complete resource document (not just\
> an id), so the caller doesn't need to fan out individual GETs.\
> \
> Returns \`200\` with \`{imports:\[], exports:\[], tools:\[]}\` when the tool\
> has no dependencies.\
> \
> AI guidance:\
> \- Pair with \`GET /v1/tools/{\_id}/connections\` to enumerate the full\
> &#x20; resource + connection footprint in two calls.\
> \- Useful for impact analysis (what breaks if this import changes?),\
> &#x20; template/clone previews, and orphaned-resource detection.

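Because descendants come back already grouped by type and connections come from the sibling endpoint, the full footprint takes exactly two calls. A minimal sketch, assuming both responses have already been fetched and JSON-decoded; the sample payloads are illustrative only:

```python
# Combine GET /v1/tools/{_id}/descendants ({imports, exports, tools} arrays
# of full resource docs) with GET /v1/tools/{_id}/connections (array of
# Connection docs) into one impact-analysis summary. Payloads are hypothetical.

def tool_footprint(descendants, connections):
    """Count dependencies per resource type for a quick impact-analysis view."""
    counts = {kind: len(docs) for kind, docs in descendants.items()}
    counts["connections"] = len(connections)
    return counts

descendants = {"imports": [{"_id": "i1"}], "exports": [], "tools": [{"_id": "t2"}]}
connections = [{"_id": "c1", "type": "http"}]

print(tool_footprint(descendants, connections))
```

A tool with no dependencies returns `{imports: [], exports: [], tools: []}` and `[]`, so every count in the summary is simply zero.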
````json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Response-3":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request-3"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this import."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this import was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this import."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this import is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this import expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this import."}}}]},"Request-3":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. 
This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. 
This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique identifier for the connector associated with the resource or operation. 
This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the 
system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used with a custom adapter built outside of Celigo, and is very rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. 
Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used 
in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite-2"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3-2"},"wrapper":{"$ref":"#/components/schemas/Wrapper-2"},"salesforce":{"$ref":"#/components/schemas/Salesforce-2"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. It is typically a human-readable string that uniquely distinguishes the item from others in the same context. 
This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by 
any cryptographic methods. This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. 
This data serves as example input or output to help users understand the expected format, structure, and content. It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts 
boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation guidance:**  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  \n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  
\n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions 
checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires a properly configured lookup to identify existing records, or an ignoreExtract field configured to identify the field that is used to determine if the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical 
details**\n- Works in conjunction with lookup configurations to determine if a record exists or has an ignoreExtract field configured to determine if the record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency** CHAIN:  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  
\n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. 
This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or 
databases."},"blob":{"type":"boolean","description":"Indicates whether this import transfers binary large object (blob) data, such as files, images, or other raw binary content, rather than structured records.\n\n**Field behavior**\n- When set to true, the import moves an opaque binary payload instead of mapping structured record fields.\n- The payload itself is located via a blob key reference (see blobKeyPath); this flag only marks the import as blob-based.\n- The content is treated as opaque and is not parsed or interpreted during transfer.\n\n**Implementation guidance**\n- Enable only for file-transfer style imports where the payload should not be parsed or mapped.\n- Ensure a valid blob key reference is configured so the binary content can be located.\n- Be aware of size limits imposed by the API or transport.\n\n**Important notes**\n- Blob content should not be altered during transmission.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Apply appropriate security controls when handling binary data.\n\n**Dependency chain**\n- Works together with blobKeyPath and any metadata fields describing the blob type or format.\n\n**Technical details**\n- Implemented as a boolean flag.\n- May require MIME type or content-type metadata for correct interpretation of the transferred content."},"assistant":{"type":"string","description":"Identifies the prebuilt application connector (assistant) on which this import is based. Assistants provide preconfigured endpoints, operations, and metadata for specific applications, so the import can be configured against a known API instead of a fully manual HTTP setup.\n\n**Field behavior**\n- A string naming the target application connector.\n- Determines which assistant-specific settings and metadata apply to the import.\n- Works together with assistantMetadata to capture connector-specific configuration.\n\n**Implementation guidance**\n- Use a value corresponding to a supported assistant; unrecognized values will not resolve to a connector.\n- Keep the value consistent with the connection and any assistantMetadata on the same import.\n\n**Important notes**\n- Changing the assistant after configuration may invalidate assistant-specific settings.\n- Leave unset for fully manual (non-assistant) imports.\n\n**Dependency chain**\n- Interacts with the connection and assistantMetadata associated with the import.\n\n**Technical details**\n- Plain string value interpreted by the platform to load the matching connector definition."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data 
type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or 
similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value 
changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load 
balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new 
placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock 
behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"64a1f2e3b4c5d6e7f8091a2b\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- A 24-character hexadecimal ObjectId string, per the objectId format.\n- Stored in a database with indexing for quick lookup.\n- Used 
as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"Parser configuration flag for this import. Note that, despite the plural name, this schema declares the field as a string (or null) whose only permitted value is \"1\"; it acts as a simple selector rather than an array of parser definitions.\n\n**Field behavior**\n- Accepts the string \"1\" to indicate that the import's configured parsing behavior applies, or null to leave it unset.\n- Any other value is rejected by schema validation.\n\n**Important notes**\n- Incorrect parser configuration can lead to data misinterpretation or processing errors.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n\n**Technical details**\n- Nullable string constrained by an enum containing the single value \"1\"."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. 
This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Reference the _id of an existing script resource in the account.\n**Examples**\n- \"507f191e810c19729de860ea\"\n- \"64b2c3d4e5f60718293a4b5c\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- A 24-character hexadecimal ObjectId string referencing a script resource.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  
\n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency** CHAIN:  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. 
This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data 
structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values 
and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept 
parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"5f6c2e9a1b4d3c0012e8f7ab\"  \n- \"60a1b2c3d4e5f60011223346\"  \n- \"64f0e1d2c3b4a50099887768\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- A 24-character hexadecimal ObjectId string.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"5f6c2e9a1b4d3c0012e8f7ac\"\n- \"60a1b2c3d4e5f60011223347\"\n- \"64f0e1d2c3b4a50099887769\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- A 24-character hexadecimal ObjectId string.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. 
This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped 
data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique 
identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Reference the script by its system-generated ID to avoid conflicts or errors.\n\n**Examples**\n- \"5f6c2e9a1b4d3c0012e8f7ad\"\n- \"60a1b2c3d4e5f60011223348\"\n- \"64f0e1d2c3b4a5009988776a\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- A 24-character hexadecimal ObjectId string.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID is generated following the system’s unique identification standards.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Gracefully handle cases where the stack ID does not exist or is invalid.\n\n**Examples**\n- \"5f6c2e9a1b4d3c0012e8f7ae\"\n- \"60a1b2c3d4e5f60011223349\"\n- \"64f0e1d2c3b4a5009988776b\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- A 24-character hexadecimal ObjectId string.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. 
This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A callback function or hook that is executed immediately after a form submission 
process completes. This hook allows for custom logic to be run post-submission, such as handling responses, triggering notifications, updating UI elements, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after the form submission has finished, regardless of success or failure.\n- Receives submission result data or error information as arguments.\n- Can be asynchronous to support operations like API calls or state updates.\n- Does not affect the submission process itself but handles post-submission side effects.\n\n**Implementation guidance**\n- Ensure the function handles both success and error scenarios gracefully.\n- Avoid long-running synchronous operations to prevent blocking the UI.\n- Use this hook to trigger any follow-up actions such as analytics tracking or user feedback.\n- Validate that the hook is defined before invocation to prevent runtime errors.\n\n**Examples**\n- Logging submission results to the console.\n- Displaying a success message or error notification to the user.\n- Redirecting the user to a different page after submission.\n- Resetting form fields or updating application state based on submission outcome.\n\n**Important notes**\n- This hook is optional; if not provided, no post-submission actions will be performed.\n- It should not be used to modify the submission data itself; that should be handled before submission.\n- Proper error handling within this hook is crucial to avoid unhandled exceptions.\n\n**Dependency chain**\n- Depends on the form submission process completing.\n- May interact with state management or UI components updated after submission.\n- Can be linked with pre-submit hooks for comprehensive form lifecycle management.\n\n**Technical details**\n- Typically implemented as a function accepting parameters such as submission response and error.\n- Can return a promise to support asynchronous operations.\n- Should be registered in the form configuration under the hooks.postSubmit property.\n- Execution 
context may vary depending on the form library or framework used."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed. The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a 
callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"5f6c2e9a1b4d3c0012e8f7af\"\n- \"60a1b2c3d4e5f6001122334a\"\n- \"64f0e1d2c3b4a5009988776c\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical 
details**\n- A 24-character hexadecimal ObjectId string.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"5f6c2e9a1b4d3c0012e8f7b0\"\n- \"60a1b2c3d4e5f6001122334b\"\n- \"64f0e1d2c3b4a5009988776d\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- A 24-character hexadecimal ObjectId string.\n- Should be indexed or cached for efficient 
lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration 
scenarios.\n- Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregate query execution.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before sending to clients.\n\n**Important notes**\n- This hook is specific to aggregate operations and will not trigger on other query types.\n- Modifications in this hook do not affect the underlying database.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before response.\n\n**Dependency chain**\n- Triggered after the aggregate query execution phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a 
function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, query context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may 
have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded 
metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended 
information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Defines the collection of individual field mappings within the overall mapping configuration. Each entry specifies the characteristics, data types, and indexing options for a particular field in the dataset or document structure. This property enables precise control over how each field is interpreted, stored, and queried by the system.\n\n**Field behavior**\n- Contains a set of key-value pairs where each key is a field name and the value is its mapping definition.\n- Determines how data in each field is processed, indexed, and searched.\n- Supports nested fields and complex data structures.\n- Can include settings such as data type, analyzers, norms, and indexing options.\n\n**Implementation guidance**\n- Define all relevant fields explicitly to optimize search and storage behavior.\n- Use consistent naming conventions for field names.\n- Specify appropriate data types to ensure correct parsing and querying.\n- Include nested mappings for objects or arrays as needed.\n- Validate field definitions to prevent conflicts or errors.\n\n**Examples**\n- Mapping a text field with a custom analyzer.\n- Defining a date field with a specific format.\n- Specifying a keyword field for exact match searches.\n- Creating nested object fields with their own sub-fields.\n\n**Important notes**\n- Omitting fields may lead to default dynamic mapping behavior, which might not be optimal.\n- Incorrect field definitions can cause indexing errors or unexpected query results.\n- Changes to field mappings often require reindexing of existing data.\n- Field names should avoid reserved characters or keywords.\n\n**Dependency chain**\n- Depends on the overall mapping configuration context.\n- Influences indexing and search components downstream.\n- Interacts with analyzers, tokenizers, and query parsers.\n\n**Technical 
details**\n- Typically represented as a JSON or YAML object with field names as keys.\n- Each field mapping includes properties like \"type\", \"index\", \"analyzer\", \"fields\", etc.\n- Supports complex types such as objects, nested, geo_point, and geo_shape.\n- May include metadata fields for internal use."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"A collection of lists that define specific groupings or categories used within the mapping context. Each list contains a set of related items or values that are referenced to organize, filter, or map data effectively. These lists facilitate structured data handling and improve the clarity and maintainability of the mapping configuration.\n\n**Field behavior**\n- Contains multiple named lists, each representing a distinct category or grouping.\n- Lists can be referenced elsewhere in the mapping to apply consistent logic or transformations.\n- Supports dynamic or static content depending on the mapping requirements.\n- Enables modular and reusable data definitions within the mapping.\n\n**Implementation guidance**\n- Ensure each list has a unique identifier or name for clear referencing.\n- Populate lists with relevant and validated items to avoid mapping errors.\n- Use lists to centralize repeated values or categories to simplify updates.\n- Consider the size and complexity of lists to maintain performance and readability.\n\n**Examples**\n- A list of country codes used for regional mapping.\n- A list of product categories for classification purposes.\n- A list of status codes to standardize state representation.\n- A list of user roles for access control mapping.\n\n**Important notes**\n- Lists should be kept up-to-date to reflect current data requirements.\n- Avoid duplication of items across different lists unless intentional.\n- The structure and format of list items must align with the overall mapping schema.\n- Changes to lists may impact dependent 
mapping logic; test thoroughly after updates.\n\n**Dependency chain**\n- Lists may depend on external data sources or configuration files.\n- Other mapping properties or rules may reference these lists for validation or transformation.\n- Updates to lists can cascade to affect downstream processing or output.\n\n**Technical details**\n- Typically represented as arrays or collections within the mapping schema.\n- Items within lists can be simple values (strings, numbers) or complex objects.\n- Supports nesting or hierarchical structures if the schema allows.\n- May include metadata or annotations to describe list purpose or usage."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling. It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on 
its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by 
enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on 
system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2 engine to route each message exchange through the designated connector."},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the 
target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\""},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a 
Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability."},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol 
limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves the likelihood of successful delivery under transient failure conditions."},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the 
AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"string","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the 
system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["boolean","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is 
applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  
\n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <"}}},"Dynamodb":{"type":"object","description":"Configuration for Dynamodb exports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. 
Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions. Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS 
announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for optimizing application performance, cost efficiency,"},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies which DynamoDB item-level operation to perform: \"putItem\" or \"updateItem\". All DynamoDB API calls are transmitted as HTTPS POST requests, so this property selects the DynamoDB action to invoke, not an HTTP verb.\n\n**Field behavior**\n- \"putItem\" creates a new item, or fully replaces an existing item that has the same primary key.\n- \"updateItem\" modifies only the attributes named in the update expression, creating the item if it does not already exist.\n- Determines which sibling properties apply: \"putItem\" uses itemDocument, while \"updateItem\" uses updateExpression, conditionExpression, expressionAttributeNames, and expressionAttributeValues.\n\n**Implementation guidance**\n- Choose \"putItem\" when writing complete records and full replacement of any existing item is acceptable.\n- Choose \"updateItem\" for partial, attribute-level changes (e.g., incrementing counters) so that attributes omitted from the request are preserved.\n- Pair \"updateItem\" with a conditionExpression when the update must only apply to items in an expected state.\n\n**Examples**\n- \"putItem\"\n- \"updateItem\"\n\n**Important notes**\n- Only the two enumerated values are accepted; any other value is rejected.\n- With \"putItem\", attributes present on the stored item but omitted from itemDocument are removed, because the item is replaced in full."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity. 
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform 
operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition 
key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB 
tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is a JSON document, supplied as a string value, that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ 
\"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., 
'#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The 
conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a JSON-encoded map, supplied as a string, of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those 
defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a JSON-encoded map, supplied as a string, of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters 
to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \""},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field behavior:**  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  \n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  
\n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n-"}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP-based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have an assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have an assistant form for and utilizes GraphQL.  
This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.) - most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate 
limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. 
Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (do NOT do this)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST COMMON CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE USE CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. 
When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. 
Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for 
header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for Ftp 
exports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for 
correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. 
The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- 
`projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory 
paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. 
Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate 
synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  \n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  
\n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. 
Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. 
This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"The MongoDB operation to perform when writing incoming records. This is not an HTTP verb: the only supported values are \"insertMany\" (insert incoming records as new documents) and \"updateOne\" (modify a single existing document that matches the configured filter). The chosen method determines which sibling properties apply and how MongoDB processes the request.\n\n**Field behavior**\n- \"insertMany\" inserts each incoming record into the target collection as a new document.\n- \"updateOne\" applies the `update` specification to the first document matching `filter`; when `upsert` is true, a new document is inserted if no match exists.\n- Determines which sibling properties are relevant: `document` applies to inserts, while `filter`, `update`, and `upsert` apply to updates.\n- Affects idempotency and retry behavior: repeating an \"insertMany\" can create duplicate documents, whereas a well-filtered \"updateOne\" is typically safe to retry.\n\n**Implementation guidance**\n- Use \"insertMany\" for create-only flows; combine with `ignoreExtract` or `ignoreLookupFilter` (plus the top-level `ignoreExisting` flag) to skip records that already exist.\n- Use \"updateOne\" for update or upsert flows, and always supply a `filter` so the correct document is targeted.\n- Validate the value against the enum; any value other than \"insertMany\" or \"updateOne\" is invalid.\n\n**Examples**\n- \"insertMany\": create new customer documents from incoming records.\n- \"updateOne\" with `upsert: true`: update an order if it exists, otherwise insert it.\n\n**Important notes**\n- The method must be consistent with the other MongoDB properties set on the 
import."},"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' 
except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced 
operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format 
constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain 
overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent 
unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. 
The value for only this array item (not the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. 
**Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the 
target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true, please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. 
The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite-2":{"type":"object","description":"Configuration for NetSuite exports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field 
or attribute within a dataset or index that the search operation should target. This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- 
Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly 
complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or 
misspelled field names will result in missing or null values.\n- The field must be present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in 
administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a 
specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and 
enhance the main data model or application logic. These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting 
core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. 
Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require 
additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: 
\"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and 
NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case-sensitive and must match the exact NetSuite internal record type identifier."},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type 
IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type.\n\n**Important notes**\n- Immutable once assigned; changing the recordTypeId of an existing record instance is not permitted.\n- Case sensitivity and formatting follow the NetSuite API specifications."},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection 
of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  
\n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  \n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  
\n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting"},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  
\n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  \n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. 
The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email 
address attribute.\n- \"name\" — to search by the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId"},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and 
enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate >"}},"description":"internalIdLookup is an object that configures how the system resolves the unique internal ID assigned by NetSuite to the target record before performing the operation. Rather than requiring the internal ID to be supplied directly, this object defines a lookup strategy: `extract` pulls the internal ID from data already present in the source record, while `searchField` and `expression` define a search that locates the matching record and returns its internal ID. Because internal IDs are guaranteed to be unique within the NetSuite environment and remain consistent over time, resolving them up front enables precise and unambiguous access to the exact record. When this object is omitted, the operation must rely on external identifiers, display names, or other non-internal keys, which may be less precise and could lead to multiple matches or ambiguity in the results.\n\n**Field behavior**\n- Resolves the NetSuite internal ID of the target record prior to the main operation.\n- Uses `extract` to derive the internal ID from a field in the source data when it is already available.\n- Uses `searchField` together with `expression` to locate the record via a search when the internal ID must be looked up.\n- Ensures efficient, accurate, and unambiguous record retrieval by leveraging stable internal identifiers.\n\n**Implementation guidance**\n- Configure `extract` when the internal ID is already present in the source data; otherwise define a search using `searchField` and `expression`.\n- Confirm that the resolved internal ID corresponds to the correct record type to avoid mismatches or errors.\n- Validate the format and existence of the internal ID before performing the dependent operation.\n- Implement comprehensive error handling for cases where the lookup returns no record, an invalid ID, or multiple candidate records.\n\n**Examples**\n- Configuring `extract` to pull the internal ID directly from the incoming record data.\n- Configuring `searchField: \"externalId\"` with a matching `expression` to locate a record by its externally assigned identifier.\n- Omitting `internalIdLookup`, causing the system to perform lookups based on external or display identifiers.\n\n**Important notes**\n- Internal IDs are stable, unique, and system-generated identifiers within NetSuite, providing the most reliable reference for records.\n- Resolving internal IDs can improve lookup performance and reduce ambiguity compared to relying on external or display identifiers.\n- A lookup configured against non-internal values will likely result in lookup failures or no matching records found.\n- This property is specific to 
NetSuite integrations and may not be applicable or recognized in other systems or contexts.\n- Ensure synchronization between the internal ID used and the expected record type to maintain data integrity and"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  \n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  
\n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to"},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation 
are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review.\n- warningAsError: true —"},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  \n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  
\n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the write preferences applied when this integration sends records to NetSuite, controlling how the operation handles read-only fields, warnings, and custom metadata requests. These preferences tune the runtime behavior of the operation itself: `ignoreReadOnlyFields` determines whether modifications to protected fields are silently skipped, `warningAsError` controls whether NetSuite warnings are escalated to blocking errors, and `skipCustomMetadataRequests` governs whether custom metadata is fetched during processing. Together they let integrations balance strictness, resilience, and performance.\n\n**Field behavior**\n- Groups the operation-level preference flags that tailor how records are written to NetSuite.\n- Each sub-property is an independent boolean; unset flags fall back to their documented defaults.\n- Changes to these preferences affect error handling, data completeness, and processing performance for the associated operation.\n\n**Implementation guidance**\n- Review each sub-property's description before enabling it, as the flags interact with error handling, auditing, and metadata retrieval.\n- Test preference combinations in a staging environment to confirm the resulting behavior matches expectations before deploying to production.\n\n**Examples**\n- `{ \"ignoreReadOnlyFields\": true, \"warningAsError\": false }`\n- `{ \"warningAsError\": true, \"skipCustomMetadataRequests\": true }`"},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? 
\" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive 
files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content"},"folder":{"type":"string","description":"The folder within the NetSuite file cabinet where the file is stored or intended to be stored, specified either as a numeric folder ID or as a folder path string (absolute or relative, depending on the API context).\n\n**Field behavior**\n- Required when uploading new files or relocating existing files; optional during metadata retrieval if folder context is implicit.\n- Updating this property on an existing file moves the file to the specified folder.\n- Folder-level security settings determine file visibility and access permissions.\n\n**Implementation guidance**\n- Confirm the target folder exists and that the user or integration has rights to access or modify it before assignment.\n- Prefer the internal numeric folder ID to avoid ambiguity.\n- Normalize folder paths to NetSuite’s hierarchical structure and naming conventions, and return informative errors when a folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, update any dependent metadata or references accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID)\n- \"/Documents/Invoices\" (absolute folder path)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder."},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable 
file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- 
Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the Net"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  \n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  
\n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. 
It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import 
operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. 
Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to 
NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite 
subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not 
updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value 
pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. 
Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. 
Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"first_page\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"Lookup definitions used to resolve reference values during the import, referenced by name from field mappings where supported."},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). 
Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. 
`bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... 
}\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
Proper validation and sanitization of this value are essential to ensure the operation's success and to prevent SQL injection or other security vulnerabilities.\n\n**Field behavior**\n- Specifies the exact destination table for the bulk insert operation.\n- Must correspond to an existing table in the database schema.\n- Case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable.\n- Used directly in the SQL `INSERT INTO` statement.\n- Influences how data is mapped and inserted during the bulk operation.\n\n**Implementation guidance**\n- Verify the existence and accessibility of the table in the target database before execution.\n- Ensure compliance with the RDBMS naming rules, including reserved keywords, allowed characters, and maximum length.\n- Support and correctly handle schema-qualified table names, respecting database-specific syntax.\n- Sanitize and validate input rigorously to prevent SQL injection and other security risks.\n- Apply appropriate quoting or escaping mechanisms based on the RDBMS (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Consider the impact of case sensitivity, especially when dealing with quoted identifiers.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- The target table must exist and have the appropriate schema and permissions to accept bulk inserts.\n- Incorrect, misspelled, or non-existent table names will cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive.\n- Avoid using unvalidated dynamic input to mitigate security vulnerabilities.\n- Schema qualifiers should be used consistently to avoid ambiguity in multi-schema environments.\n\n**Dependency chain**\n- Relies on the database connection and authentication configuration.\n- Interacts with other bulkInsert 
parameters such as column mappings and data payload.\n- May be influenced by transaction management, locking mechanisms, and database constraints during the insert.\n- Dependent on the database's metadata for validation and existence checks."},"batchSize":{"type":"string","description":"The number of records grouped into a single batch during a bulk insert operation. Tuning this value balances memory consumption, transaction overhead, and throughput when loading large volumes of data.\n\n**Field behavior**\n- Determines the size and frequency of database transactions: larger batches improve throughput but increase memory usage, transaction duration, and the risk of timeouts or locks; smaller batches reduce memory footprint at the cost of more transactions.\n- Defines the scope of error handling — a failure typically affects only the current batch, allowing partial retries or rollbacks.\n- Controls the granularity of commit points, which shapes rollback and recovery behavior.\n\n**Implementation guidance**\n- Choose a batch size that respects the database's transaction limits, available memory, and network conditions; benchmark different sizes to find the optimum.\n- Account for record size and serialization overhead — larger or more complex records may warrant smaller batches.\n- Verify compatibility with database drivers or middleware, which may impose their own batch constraints.\n\n**Examples** (note the value is passed as a string)\n- \"100\": systems with limited memory, or where minimizing transaction size and duration is critical.\n- \"1000\": moderate bulk inserts, balancing speed and resource consumption.\n- \"5000\": a common default providing a good compromise between performance and resource usage.\n- \"50000\": high-throughput environments with ample memory and a tuned database configuration."}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table for the bulk load operation. It must reference a valid, existing table in the database schema, may include schema qualifiers where supported (e.g., schema.tableName), and is subject to the naming conventions and case-sensitivity rules of the target RDBMS. 
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, 
append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they contain mixed case, special characters, or reserved words."},"primaryKeys":{"type":["string","null"],"description":"The column name(s) that uniquely identify each record in the target table during a bulk load. When set, the bulk load performs an upsert: an auto-generated MERGE matches incoming rows against existing rows on these keys, updating matches and inserting the rest. When omitted (or null), the bulk load performs a plain auto-generated INSERT.\n\n**Field behavior**\n- Drives duplicate detection and conflict resolution during the load.\n- Supports single-column and composite keys; because the field is a string, composite keys are expressed as a comma-separated list of column names.\n\n**Implementation guidance**\n- Column names must exactly match the target schema, respecting case sensitivity where applicable.\n- Include every column of a composite key, in the order defined by the schema.\n- Confirm the key columns are indexed or constrained in the target database for performance.\n- Prefer stable, immutable key columns to avoid inconsistencies across repeated loads.\n\n**Examples**\n- \"id\" — single-column primary key.\n- \"order_id,product_id\" — composite primary key.\n\n**Important notes**\n- Omitting primaryKeys when upsert semantics are required results in duplicated data or incorrect conflict handling."},"overrideMergeQuery":{"type":"boolean","description":"When true, the auto-generated merge statement for the bulk load is replaced by caller-supplied custom merge SQL. Use this for logic the default MERGE cannot express — ignore-existing semantics, conditional updates, additional filtering, or multi-table operations. When false or omitted, the system generates and executes a default merge/insert based on tableName and primaryKeys.\n\n**Field behavior**\n- Applies to the entire bulk load operation, affecting every record in the batch.\n- The custom SQL runs after data is staged and references the staging table via {{import.rdbms.bulkLoad.preMergeTemporaryTable}}.\n- The custom SQL must be syntactically valid for the target RDBMS and must itself handle every required merge scenario (insert, update, optionally delete) to avoid partial or inconsistent states.\n- Runs within the transactional context of the bulk load, preserving atomicity and rollback on failure.\n\n**Implementation guidance**\n- Validate the custom SQL against the target RDBMS and test it in a staging environment before production use.\n- Guard against SQL injection when building the query, and consider transaction isolation and locking behavior to avoid deadlocks or contention during the load.\n\n**Example (illustrative ignore-existing merge)**\n- \"INSERT INTO customers (id, name) SELECT id, name FROM {{import.rdbms.bulkLoad.preMergeTemporaryTable}} src WHERE NOT EXISTS (SELECT 1 FROM customers t WHERE t.id = src.id)\""}},"description":"Configuration for file-staged bulk load operations. Set this object when queryType is [\"bulk_load\"]. The import stages records as a file and loads them through the database's bulk/COPY mechanism, which minimizes per-row overhead and is the fastest load path for supported databases (currently Snowflake and NSAW).\n\n**Field behavior**\n- tableName (required) names the destination table.\n- primaryKeys enables upsert via an auto-generated MERGE; omit it for a pure auto-generated INSERT.\n- overrideMergeQuery: true replaces the auto-generated statement with custom merge SQL.\n- Typically used for initial data population, large-scale migrations, or periodic batch updates where throughput matters.\n\n**Implementation guidance**\n- Verify the target database supports bulk loading and that credentials carry the permissions it requires.\n- Monitor resource usage (CPU, memory, I/O) and plan for locking or concurrency effects on the target table during the load.\n- Validate data post-load, and schedule large loads for maintenance windows or low-traffic periods where possible."},"updateLookupName":{"type":"string","description":"The exact name of the lookup table or entity targeted for update operations. It identifies which lookup data set is modified and must match the existing schema precisely; an invalid or missing value causes the update to fail or to target unintended data.\n\n**Field behavior**\n- Acts as the key identifier the update process uses to locate the correct data set.\n- Required when performing update operations on lookup data; its value directly determines which records are affected.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup name, respecting RDBMS naming conventions, length restrictions, and case sensitivity.\n- Maintain consistent naming across the application to prevent ambiguity.\n- Avoid using reserved keywords, special characters, or
whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
Additionally, it allows for fine-grained control through filters, timestamps, and criteria to selectively update portions of the extract, while managing concurrency and maintaining data consistency throughout the process.\n\n**Field behavior**\n- Determines the update strategy applied to an existing data extract, including incremental, full refresh, conditional, or merge operations.\n- Controls how new data interacts with existing extract data—whether by overwriting, appending, or merging.\n- Supports selective updates using filters, timestamps, or conditional criteria to target specific subsets of data.\n- Manages transactional integrity to ensure updates are atomic, consistent, isolated, and durable (ACID-compliant).\n- Coordinates update execution to prevent conflicts, data corruption, or partial updates.\n- Enables configuration of error handling, logging, and rollback mechanisms during update processes.\n- Handles concurrency control to avoid race conditions and ensure data consistency in multi-user environments.\n- Allows scheduling and triggering of update operations based on time, events, or external signals.\n\n**Implementation guidance**\n- Ensure update configurations comply with organizational data governance, security, and integrity policies.\n- Validate all input parameters and update conditions to avoid inconsistent or partial data modifications.\n- Implement robust transactional support to allow rollback on failure and maintain data consistency.\n- Incorporate detailed logging and error reporting to facilitate monitoring, auditing, and troubleshooting.\n- Optimize update methods based on data volume, change frequency, and performance requirements, balancing between incremental and full refresh approaches.\n- Tailor update logic to leverage specific capabilities and constraints of the target RDBMS, including locking and concurrency controls.\n- Coordinate update timing and execution with downstream systems and data consumers to minimize 
disruption.\n- Design update processes to be idempotent where possible to support safe retries and recovery.\n- Consider the impact of update latency on data freshness and downstream analytics.\n\n**Examples**\n- Configuring an incremental update that uses a last-modified timestamp column to append only new or changed records to the extract.\n- Defining a full refresh update that completely replaces the existing extract data with a newly extracted dataset."},"ignoreLookupName":{"type":"string","description":"The name of the lookup whose result determines whether a record should be ignored (for example, to skip records that already exist in the target table). When set, the named lookup is evaluated for each record and its result drives the ignore decision; when omitted, no lookup-based ignore check is performed.\n\n**Field behavior**\n- References a lookup by name; the value must exactly match a configured lookup.\n- Used together with ignore-existing / ignore-missing style logic to decide, per record, whether the import should be skipped.\n\n**Implementation guidance**\n- Verify the named lookup exists and returns a result suitable for an ignore decision.\n- Keep the name consistent across environments (development, staging, production) so records are not ignored in one environment but imported in another.\n\n**Important notes**\n- An incorrect or misspelled lookup name prevents the ignore check from working as intended, which can lead to duplicate or missing writes."},"ignoreExtract":{"type":"string","description":"The extract path of the record field whose value is used when deciding whether to ignore a record (for example, the field matched against existing data for ignore-existing logic). When set, the value at this path is read from each incoming record and used in the ignore check; when omitted, no field-based ignore check is performed.\n\n**Field behavior**\n- Identifies which field in the incoming record supplies the comparison value for the ignore decision.\n- Evaluated per record, so each record is independently ignored or imported.\n\n**Implementation guidance**\n- Ensure the path resolves to a field that is present and populated on incoming records; records missing the field cannot be reliably checked.\n- Choose a stable, unique field (such as an ID or email) so the ignore decision is deterministic.\n\n**Important notes**\n- An incorrect path can cause records to be imported when they should be ignored, or ignored when they should be imported, producing duplicates or missing data."}}},"S3-2":{"type":"object","description":"Configuration for S3 exports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is physically located. The region affects data-access latency, availability, redundancy, storage and transfer costs, and compliance with regional data-governance and residency requirements. It must be a valid AWS region identifier that matches the bucket's actual location; a mismatch causes connectivity issues, authentication failures, and improper routing of API requests. 
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- 
Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and 
residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts; a bucket cannot be renamed after creation, and its name becomes available for reuse only after the bucket is deleted.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. 
This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and 
read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1\""},"serverSideEncryptionType":{"type":"string","description":"Specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with 
key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper-2":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors 
or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": 
\"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"Specifies the category or classification of the lookup within the wrapper context. This property defines the nature or kind of lookup being performed or referenced.\n\n**Field behavior**\n- Determines the specific type of lookup operation or data classification.\n- Influences how the lookup data is processed or interpreted.\n- May restrict or enable certain values or options based on the type selected.\n\n**Implementation guidance**\n- Use clear and consistent naming conventions for different types.\n- Validate the value against a predefined set of allowed types to ensure data integrity.\n- Ensure that the type aligns with the corresponding lookup logic or data source.\n\n**Examples**\n- \"userRole\"\n- \"productCategory\"\n- \"statusCode\"\n- \"regionCode\"\n\n**Important notes**\n- The type value is critical for correctly resolving and handling lookup data.\n- Changing the type may affect downstream processing or data retrieval.\n- Ensure compatibility with other related fields or components that depend on this type.\n\n**Dependency chain**\n- Depends on the wrapper context to provide scope.\n- Influences the selection and retrieval of lookup values.\n- May be linked to validation schemas or business logic modules.\n\n**Technical details**\n- Typically represented as a string.\n- Should conform to a controlled vocabulary or enumeration where applicable.\n- Case sensitivity may apply 
depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to 
identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case 
sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n\n**Field behavior**\n- Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n- Used to customize or override default lookup values dynamically.\n- Supports string keys and values, but may also support other data types depending on implementation.\n\n**Implementation guidance**\n- Ensure keys are unique within the map to avoid conflicts.\n- Validate that values conform to expected formats or types required by the lookup logic.\n- Consider immutability or controlled updates to prevent unintended side effects during runtime.\n- Provide clear error handling for missing or invalid keys during lookup operations.\n\n**Examples**\n- {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n- {\"env\": \"production\", \"version\": \"1.2.3\"}\n\n**Important notes**\n- The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n- Large maps may impact performance; optimize size and access patterns accordingly.\n- Changes to the map may require reinitialization or refresh of dependent components.\n\n**Dependency chain**\n- Depends on the wrapper.lookups context for proper integration.\n- May be referenced by other properties or methods performing lookup operations.\n\n**Technical details**\n- Typically implemented as a JSON object or dictionary data structure.\n- Keys and values are usually strings but can be extended to other serializable types.\n- Should support efficient retrieval, ideally O(1) time complexity for 
lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, 
number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup 
entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The 
item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their 
meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce-2":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. 
Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion 
or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on 
order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management 
scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update 
existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 200 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n
- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** for most import operations\n- If prompt doesn't specify API type, use **\"soap\"**\n- Use **\"soap\"** when AllOrNone transaction control is required\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default when no API is specified**\n- SOAP API has slightly more overhead but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the 
chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle 
individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and"}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. 
Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": 
\"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch 
size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200"}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to 
align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  \n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  
\n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix."},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from 
CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": 
\"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for 
looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. 
This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  
},\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. 
Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why it's required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional 
upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n**picklist**\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n**multipicklist**\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection 
box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only 
or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable fields"},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- Case-sensitive and may vary in length (15 or 18 characters).  \n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  
\n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical details:**  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18 characters (with a case-insensitive checksum)"},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case 
sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder 
location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important 
notes**\n- Incorrect or mismatched contentType values can cause improper document rendering or processing failures."},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise and descriptive, and should strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- 
Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and integration workflows."},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security 
postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public 
access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts with organizational sharing rules and record-level security settings."}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported 
formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to 
minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\" (18-character form)\n- \"00P1t00000XyzAb\" (15-character form)\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment into the broader record model."},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an 
attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user trust and ease of identification."},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with 
preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ1QAA\" (Account record ID)\n- \"0031a000004ABC4QAA\" (Contact record ID)\n- \"0061a000005DEF7QAA\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring that the attachment remains correctly associated with its parent record."},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- 
\"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper rendering, processing, or handling of the attachment."},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or 
false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- 
Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters."}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When 
updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- 
Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXXAAA\"\n- \"0692a00000YYYYYYBBB\"\n- \"0693b00000ZZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content 
management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case 
sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
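\n\nAs an illustration (hypothetical values; `normalized` is not a Salesforce field), a raw tag string alongside the trimmed, lowercased form an application might choose to store:\n\n```json\n{\n  \"tagCsv\": \" marketing, Q2 ,presentation\",\n  \"normalized\": \"marketing,q2,presentation\"\n}\n```\n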
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are 
stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and 
externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. 
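\n\n**Example (illustrative record)**\n\nA sketch of a content version record combining the sub-fields described above (hypothetical IDs and paths):\n\n```json\n{\n  \"contentVersion\": {\n    \"contentDocumentId\": \"0691t00000XXXXXXAAA\",\n    \"title\": \"Q2 Financial Report 2024\",\n    \"pathOnClient\": \"/home/janedoe/reports/q2-financials.pdf\",\n    \"tagCsv\": \"finance,Q2,approved\",\n    \"contentLocation\": \"S\"\n  }\n}\n```\n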
Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content 
changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). 
This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible 
outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. 
Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
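\n\nFor example, a `file` block referencing an existing file definition might look like this sketch (the `_fileDefinitionId` value is a hypothetical placeholder):\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"orders-{{timestamp}}.edi\",\n    \"type\": \"filedefinition\",\n    \"fileDefinition\": {\n      \"_fileDefinitionId\": \"507f1f77bcf86cd799439011\"\n    }\n  }\n}\n```\n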
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for AI agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the import's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to read files from (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. 
Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation 
configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format 
configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output 
modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4K"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. 
`hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". 
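\n\nA minimal sketch of a PII guardrail configuration (the entity choices shown are illustrative):\n\n```json\n{\n  \"type\": \"pii\",\n  \"pii\": {\n    \"entities\": [\"email_address\", \"phone_number\"],\n    \"mask\": true\n  }\n}\n```\n\n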
At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
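\n\nA minimal sketch of a moderation guardrail configuration (the category and threshold choices shown are illustrative):\n\n```json\n{\n  \"type\": \"moderation\",\n  \"confidenceThreshold\": 0.8,\n  \"moderation\": {\n    \"categories\": [\"hate\", \"violence\"]\n  }\n}\n```\n\n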
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an API response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not 
affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. 
This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. 
**Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. **Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring 
(contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n- `\"anyof\"`: Value equals any of the values in the specified list (any of) — see Example 5 below\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison 
operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  
[\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. 
The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. 
This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. 
It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all 
mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. 
It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectId"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. 
Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. 
become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. JSON Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. 
Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent 
object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. 
Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner 
arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. 
For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  
}\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. **For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. 
**For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. 
When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract 
fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could 
introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For 
multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value 
pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object 
that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings, and handlebars templates at runtime.\n\nThese settings customize the resource's logic: hooks, mappings, filters, and\nhandlebars templates can all read and apply them at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation 
and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance 
(referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. 
The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Response-4":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request-4"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this export."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this export was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this export."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this export is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this export expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this export."}}}]},"Request-4":{"type":"object","description":"Fields that can be sent when creating or updating an export","properties":{"name":{"type":"string","description":"Descriptive identifier for the export resource in human-readable format.\n\nThis string serves as the primary display name for the export across the application UI and is used in:\n- API responses when listing exports\n- Error and audit logs for traceability\n- Flow builder UI components\n- Job history and monitoring dashboards\n\nWhile not required to be globally unique in the system, using descriptive, unique names is strongly recommended\nfor 
clarity when managing multiple integrations. The name should indicate the data source and purpose.\n\nMaximum length: 255 characters\nAllowed characters: Letters, numbers, spaces, and basic punctuation\n"},"description":{"type":"string","description":"Optional free-text field that provides additional context about the export's purpose and functionality.\n\nWhile not used for operational functionality in the API, this field serves several important purposes:\n- Helps document the intended data flow for this export\n- Provides context for other developers and systems interacting with this resource\n- Appears in the admin UI and export listings for easier identification\n- Can be used by AI agents to better understand the export's purpose when making recommendations\n\nBest practice is to include information about:\n- The source system and data being exported\n- The intended destination for this data\n- Any special filtering or business rules applied\n- Dependencies on other systems or processes\n\nMaximum length: 10240 characters\n","maxLength":10240},"_connectionId":{"format":"objectId","type":"string","description":"Reference to the connection resource that this export will use to access the external system.\n\nThis field contains the unique identifier of a connection resource that must exist in the system prior to creating the export.\nThe connection provides:\n- Authentication credentials and methods for the external system\n- Base URL and connectivity settings\n- Rate limiting and retry configurations\n- Connection-specific headers and parameters\n\nThe connection type must be compatible with the adaptorType specified for this export.\nFor example, if adaptorType is \"HTTPExport\", _connectionId must reference a connection with type \"http\".\n\nThis field is not required for webhook/listener exports.\n\nFormat: 24-character hexadecimal string\n"},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this 
export's operations.\n\nThis field determines:\n- Which connection types are compatible with this export\n- Which API endpoints and protocols will be used\n- Which export-specific configuration objects must be provided\n- The available features and capabilities of the export\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPExport\" for generic REST/SOAP APIs\n- \"SalesforceExport\" for Salesforce-specific operations\n- \"NetSuiteExport\" for NetSuite-specific operations\n- \"FTPExport\" for file transfers via FTP/SFTP\n- \"WebhookExport\" for realtime event listeners that receive data via incoming HTTP requests.\n\nWhen creating an export, this field must be set correctly and cannot be changed afterward\nwithout creating a new export resource.\n\nIMPORTANT: When using a specific adapter type (e.g., \"SalesforceExport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n","enum":["HTTPExport","FTPExport","AS2Export","S3Export","NetSuiteExport","SalesforceExport","JDBCExport","RDBMSExport","MongodbExport","DynamodbExport","WrapperExport","SimpleExport","WebhookExport","FileSystemExport"]},"type":{"type":"string","description":"Defines the fundamental operational mode of the export resource. This field determines:\n- What data is extracted and how\n- Which configuration objects are required\n- How the export appears and functions in the flow builder UI\n- The export's scheduling and execution behavior\n\n**Export types and their configurations**\n\n**Standard Export (undefined/null)**\n- **Behavior**: Retrieves all available records from the source system or structured file data that needs parsing. 
Default behavior is to get all records from the source system.\n- **UI Appearance**: \"Export\", \"Lookup\", or \"Transfer\" (depending on configuration)\n- **Use Case**: General purpose data extraction, full data synchronization, or structured file parsing (CSV, XML, JSON, etc.)\n- **Important Note**: For file exports that PARSE file contents into records (e.g., CSV files from NetSuite file cabinet), use this standard export type (null/undefined) with the connector's file configuration (e.g., netsuite.file). Do NOT use type=\"blob\" for parsed file exports.\n\n**\"delta\"**\n- **Behavior**: Retrieves only records changed since the last execution\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"delta\" object with dateField configuration\n- **Use Case**: Incremental data synchronization, change detection\n- **Dependencies**: Requires a system that supports timestamp-based filtering\n\n**\"test\"**\n- **Behavior**: Retrieves a limited subset of records (for testing purposes)\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"test\" object with limit configuration\n- **Use Case**: Integration development, testing, and validation\n\n**\"once\"**\n- **Behavior**: Retrieves records one time and marks them as processed in the source\n- **UI Appearance**: \"Export\" or \"Lookup\" (based on isLookup field value)\n- **Required Config**: Must provide the \"once\" object with booleanField configuration\n- **Use Case**: One-time exports, ensuring records aren't processed twice\n- **Dependencies**: Requires a system with updateable boolean/flag fields\n\n**\"blob\"**\n- **Behavior**: Retrieves raw files without parsing them into structured data records. 
The file content is transferred as-is without any parsing or transformation.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration varies by connector (e.g., filepath for FTP, http.type=\"file\" for HTTP, netsuite.blob for NetSuite)\n- **Use Case**: Raw file transfers for binary files (images, PDFs, executables) where file content should NOT be parsed into data records\n- **Important Note**: Do NOT use \"blob\" when you want to parse file contents into records. For file parsing (CSV, XML, JSON files), leave type as null/undefined and configure the connector's file object (e.g., netsuite.file for NetSuite). The \"blob\" type is specifically for transferring files without parsing them.\n\n**\"webhook\"**\n- **Behavior**: Creates an endpoint that listens for incoming HTTP requests\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"webhook\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture\n- **Dependencies**: Requires external system capable of making HTTP calls\n\n**\"distributed\"**\n- **Behavior**: Creates an endpoint that listens for incoming requests from NetSuite or Salesforce\n- **UI Appearance**: \"Listener\" flow step\n- **Required Config**: Must provide the \"distributed\" object with security configurations\n- **Use Case**: Real-time integration, event-driven architecture for NetSuite or Salesforce\n- **Dependencies**: Requires NetSuite or Salesforce to be configured to send events to the endpoint\n\n**\"simple\"**\n- **Behavior**: Allows for direct file uploads via the data loader UI\n- **UI Appearance**: \"Data loader\" flow step\n- **Required Config**: Must provide the \"simple\" object with file format configuration\n- **Use Case**: Manual data uploads, user-driven data integration\n\nThe value directly affects which configuration objects must be provided in the export resource.\nFor example, if type=\"delta\", you must include a valid 
\"delta\" object in your configuration.\n","enum":["webhook","test","delta","once","tranlinedelta","simple","blob","distributed","stream"]},"pageSize":{"type":"integer","description":"Controls the number of records in each data page when streaming data between systems.\n\nThis field directly impacts how data is streamed from the source to destination system:\n- Records are exported in batches (pages) of this size\n- Each page is immediately sent to the destination system upon completion\n- Pages are capped at a maximum size of 5 MB regardless of record count\n- Processing continues with the next page until all data is transferred\n\nConsiderations for setting this value:\n- The destination system's API often imposes limits on batch sizes\n  (e.g., NetSuite and Salesforce have specific record limits per API call)\n- Larger values improve throughput for simple records but may cause timeouts with complex data\n- Smaller values provide more granular error recovery but increase the number of API calls\n- Finding the optimal value typically requires balancing source system export speed with\n  destination system import capacity\n\nThe value must be a positive integer. If not specified, the default value is 20.\nThere is no built-in maximum value, but practical limits are determined by:\n1. The 5 MB maximum page size limit\n2. The destination system's API constraints\n3. 
Memory and performance considerations\n","default":20},"dataURITemplate":{"type":"string","description":"Defines a template for generating direct links to records in the source application's UI.\n\nThis field uses handlebars syntax to create dynamic URLs or identifiers based on the exported data.\nThe template is evaluated for each record processed by the export, and the resulting URL is:\n- Stored with error records in the job history database\n- Displayed in the error logs and job monitoring UI\n- Available to downstream steps via the errorContext object\n\nThe template can reference any field in the exported record using the handlebars pattern:\n{{record.fieldName}}\n\nCommon patterns by system type:\n- Salesforce: \"https://my.salesforce.com/lightning/r/Contact/{{record.Id}}/view\"\n- NetSuite: \"https://system.netsuite.com/app/common/entity/custjob.nl?id={{record.internalId}}\"\n- Shopify: \"https://your-store.myshopify.com/admin/customers/{{record.id}}\"\n- Generic APIs: \"{{record.id}}\" or \"{{record.customer_id}}, {{record.email}}\"\n\nThis field is optional but recommended for improved error handling and debugging.\n"},"traceKeyTemplate":{"type":"string","description":"Defines a template for generating unique identifiers for each record processed by this export.\n\nThis field allows you to override the system's default record identification logic by specifying\nexactly which field(s) should be used to uniquely identify each record. 
The trace key is used to:\n- Track records through the entire integration process\n- Identify duplicate records in the job history\n- Match updated records to previously processed ones\n- Generate unique references in error reporting\n\nThe template uses handlebars syntax and can reference:\n- Single fields: {{record.id}}\n- Combined fields: {{join \"_\" record.customerId record.orderId}}\n- Modified fields: {{lowercase record.email}}\n\nIf a transformation is applied to the exported data before the trace key is evaluated,\nfield references should omit the \"record.\" prefix (e.g., {{id}} instead of {{record.id}}).\n\nIf not specified, the system attempts to identify a unique field in each record automatically,\nbut this may not always select the optimal field for identification.\n\nMaximum length of generated trace keys: 512 characters\n"},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"isLookup":{"type":"boolean","description":"Controls whether this export operates as a lookup resource in integration flows.\n\nWhen set to true, this export's behavior fundamentally changes:\n- It expects and requires input data from a previous flow step\n- It uses input data to dynamically parameterize the export operation\n- The system injects input record fields into API requests via handlebars templates\n- Flow execution waits for this step to complete before proceeding\n- Results are directly passed to subsequent steps\n\nLookup exports are typically used to:\n- Retrieve additional details about records processed earlier in the flow\n- Find matching records in a target system for reference or update operations\n- Enrich data with information from external services\n- Validate data against reference sources\n\nAPI behavior differences when true:\n- Request templating uses both record context and other handlebars variables\n- Export is executed once per input record (or batch, depending on configuration)\n- Rate 
limiting and concurrency controls apply differently\n\nWhen false (default), the export operates in standard extraction mode, pulling data\nindependently without requiring input from previous flow steps.\n"},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"delta":{"$ref":"#/components/schemas/Delta"},"test":{"$ref":"#/components/schemas/Test"},"once":{"$ref":"#/components/schemas/Once"},"webhook":{"$ref":"#/components/schemas/Webhook"},"distributed":{"$ref":"#/components/schemas/Distributed"},"filesystem":{"$ref":"#/components/schemas/FileSystem-2"},"simple":{"type":"object","description":"Configuration for data loader exports that only run in data loader specific flows.\nNote: This field and all its properties are only relevant when the 'type' field is set to 'simple'.\n","properties":{"file":{"$ref":"#/components/schemas/File-2"}}},"http":{"$ref":"#/components/schemas/Http-2"},"file":{"$ref":"#/components/schemas/File-2"},"salesforce":{"$ref":"#/components/schemas/Salesforce-3"},"as2":{"$ref":"#/components/schemas/AS2-2"},"dynamodb":{"$ref":"#/components/schemas/DynamoDB-2"},"ftp":{"$ref":"#/components/schemas/FTP-2"},"jdbc":{"$ref":"#/components/schemas/JDBC-2"},"mongodb":{"$ref":"#/components/schemas/MongoDB-2"},"netsuite":{"$ref":"#/components/schemas/NetSuite-3"},"rdbms":{"$ref":"#/components/schemas/RDBMS-2"},"s3":{"$ref":"#/components/schemas/S3-3"},"wrapper":{"$ref":"#/components/schemas/Wrapper-3"},"parsers":{"$ref":"#/components/schemas/Parsers"},"filter":{"allOf":[{"description":"Configuration for selectively processing records from an export based on their field values.\nThis object enables precise control over which records continue through the flow.\n\n**Filter behavior**\n\nWhen configured, the filter is applied immediately after records are retrieved from the source system:\n- Records that match the filter criteria continue through the flow\n- Records that don't match are silently dropped\n- No partial record processing is 
performed\n\n**Available filter fields**\nThe fields available for filtering are the data fields from each record retrieved by the export.\n"},{"$ref":"#/components/schemas/Filter"}]},"inputFilter":{"allOf":[{"description":"Configuration for selectively processing input records in a lookup export.\n\nThis filter is only relevant for exports where `isLookup` is set to `true`, meaning\nthe export is being used as a flow step to retrieve additional data for records\nprocessed in previous steps.\n\n**Input filter behavior**\n\nWhen configured in a lookup export, this filter is applied to the incoming records\nbefore they are used to query the external system:\n- Only input records that match the filter criteria will trigger lookup operations\n- Records that don't match will pass through the step without being enriched\n- This can significantly improve performance by reducing unnecessary API calls\n\n**Use cases**\n\nCommon scenarios for using inputFilter include:\n- Only looking up additional data for records that meet certain criteria\n- Preventing API calls for records that already have the required data\n- Implementing conditional lookup logic based on record properties\n- Reducing API call volume to stay within rate limits\n\n**Available filter fields**\nThe fields available for filtering are the data fields from the input records\npassed to this lookup export from previous flow steps.\n"},{"$ref":"#/components/schemas/Filter"}]},"mappings":{"allOf":[{"description":"Field mapping configurations applied to the input records of a\nlookup export before the lookup HTTP request is made.\n\n**When this field is valid**\n\nCeligo only supports `mappings` on exports that meet **both**\nof the following conditions:\n\n1. `isLookup` is `true` (the export is being used as a\n   lookup step, not a source export), AND\n2. 
`adaptorType` is `\"HTTPExport\"` (generic REST/SOAP\n   HTTP lookup — not NetSuite, Salesforce, RDBMS, file-based,\n   or any other adaptor).\n\nFor any other combination (source exports, non-HTTP lookup\nexports), do not set this field.\n\n**Behavior when valid**\n\nWhen used on a lookup HTTP export, `mappings` transforms\neach incoming record from the upstream flow step before the\nlookup HTTP call is made:\n\n- Input records are reshaped according to the mapping rules.\n- The transformed record flows into the HTTP request — either\n  directly as the request body, or further shaped by an\n  `http.body` Handlebars template which then renders\n  against the post-mapped record.\n- Useful when the lookup target API expects a request\n  structure that differs from the upstream record shape.\n"},{"$ref":"#/components/schemas/Mappings"}]},"transform":{"allOf":[{"description":"Data transformation configuration for reshaping records during export operations.\n\n**Export-specific behavior**\n\n**Source Exports**: Transforms records retrieved from the external system before they are passed to downstream flow steps.\n\n**Lookup Exports (isLookup: true)**: Transforms the lookup results returned by the external system.\n\n**Critical requirement for lookups**\n\nFor NetSuite and most other API-based lookups, this field is **ESSENTIAL**. Raw lookup results often come in nested or complex formats that differ from what the flow requires. You **MUST** use a transform to:\n1. Flatten nested structures (e.g., `results[0].id` -> `id`)\n2. Map specific fields to the top level\n3. 
Handle empty results gracefully\n\nNote that transformed results are not automatically merged back into source records - merging is handled separately by the 'response mapping' configuration in your flow definition.\n"},{"$ref":"#/components/schemas/Transform"}]},"hooks":{"type":"object","description":"Defines custom JavaScript hooks that execute at specific points during the export process.\n\nThese hooks allow for programmatic intervention in the data flow, enabling custom transformations,\nvalidations, filtering, and error handling beyond what's possible with standard configuration.\n","properties":{"preSavePage":{"type":"object","description":"Hook that executes after records are retrieved from the source system but before\nthey are sent to downstream flow steps.\n\nThis hook can transform, filter, validate, or enrich each page of data before it\nenters subsequent flow steps. Common uses include flattening nested data structures,\nremoving unwanted records, or adding computed fields.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId.\n"},"_scriptId":{"type":"string","format":"objectId","description":"Reference to a predefined script resource containing hook functions.\n\nThe referenced script should contain the function specified in the\n'function' property.\n"},"_stackId":{"type":"string","format":"objectId","description":"Reference to the stack resource associated with this hook.\n\nUsed when the hook logic is part of a stack deployment.\n"},"configuration":{"type":"object","description":"Custom configuration object passed to the hook function.\n\nThis allows passing static parameters or settings to the hook script, making the script\nreusable across different exports with different 
configurations.\n"}}}}},"settingsForm":{"$ref":"#/components/schemas/Form"},"settings":{"$ref":"#/components/schemas/Settings"},"mockOutput":{"$ref":"#/components/schemas/MockOutput"},"_ediProfileId":{"type":"string","format":"objectId","description":"Reference to an EDI profile that this export will use for parsing X12 EDI documents.\n\nThis field contains the unique identifier of an EDI profile resource that must exist\nin the system prior to creating or updating the export. For parsing operations, the\nEDI profile provides essential settings such as:\n- Envelope-level specifications for X12 EDI documents (ISA and GS qualifiers)\n- Trading partner identifiers and qualifiers needed for validation\n- Delimiter configurations used to properly parse the document structure\n- Version information to ensure correct segment and element interpretation\n- Validation rules to verify EDI document compliance with standards\n\nIn the export context, an EDI profile is specifically required when:\n- Parsing incoming EDI documents into a structured JSON format\n- Extracting data elements from raw EDI files\n- Validating incoming EDI document structure against trading partner requirements\n- Converting EDI segments and elements into a format usable by downstream flow steps\n\nThe centralized profile approach ensures parsing consistency across all exports\nand prevents scattered configuration of parsing rules across multiple resources.\n\nFormat: 24-character hexadecimal string\n"},"_postParseListenerId":{"type":"string","format":"objectId","description":"Reference to a webhook export that will be automatically invoked after EDI parsing operations.\n\nThis field contains the unique identifier of another export resource (of type \"webhook\")\nthat will be called when an EDI file is processed, regardless of parsing success or failure.\n\n**Invocation behavior**\n\nThe listener is invoked once per file, with the following behaviors:\n\n| Scenario | Behavior |\n| --- | --- |\n| 
Successfully parsed EDI file | The listener is invoked with the parsed payload, with no error fields present |\n| Unable to parse EDI file | The listener is invoked with the payload and error information in the payload |\n\n**Supported adapter types**\n\nCurrently, this functionality is only supported for:\n- AS2Export (when parsing EDI files)\n- FTPExport (when parsing EDI files)\n\nSupport for additional adapters will be added in future releases.\n\n**Primary use case**\n\nThe primary purpose of this field is to enable automatic sending of functional acknowledgements\n(such as 997 or 999) after receiving EDI documents, whether the parse was successful or not.\nThis allows for immediate feedback to trading partners about document receipt and processing status.\n\nFormat: 24-character hexadecimal string\n"},"preSave":{"$ref":"#/components/schemas/PreSave"},"assistant":{"type":"string","description":"Identifier for the connector assistant used to configure this export."},"assistantMetadata":{"type":"object","additionalProperties":true,"description":"Metadata associated with the connector assistant configuration."},"sampleData":{"type":"string","description":"Sample data payload used for previewing and testing the export."},"sampleHeaders":{"type":"array","description":"Sample HTTP headers used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Header name."},"value":{"type":"string","description":"Header value."}}}},"sampleQueryParams":{"type":"array","description":"Sample query parameters used for previewing and testing the export.","items":{"type":"object","properties":{"name":{"type":"string","description":"Query parameter name."},"value":{"type":"string","description":"Query parameter value."}}}}},"required":["name"]},"GroupBy":{"type":"array","description":"Specifies which fields to use for grouping records in the export results. 
When configured, records with\nthe same values in these fields will be grouped together and treated as a single record by downstream\nsteps in your flow.\n\nFor example:\n- Group sales orders by customer ID to process all orders for each customer together\n- Group journal entries by accounting period to consolidate related transactions\n- Group inventory items by location to process inventory by warehouse\n\nWhen grouping is used, the export's page size determines the maximum number of groups per page, not individual\nrecords. Note that effective grouping typically requires that records with the same group field values appear\ntogether in the export data.\n","items":{"type":"string"}},"Delta":{"type":"object","description":"Configuration object for incremental data exports that retrieve only changed records.\n\nThis object is REQUIRED when the export's type field is set to \"delta\" and should not be\nincluded for other export types. Delta exports are designed for efficient synchronization\nby retrieving only records that have been created or modified since the last execution.\n\n**Default cutoff behavior (NO USER-SUPPLIED CUTOFF)**\nIf the user prompt does not specify a cutoff timestamp, delta exports MUST default to using\nthe platform-managed *last successful run* timestamp. In integrator.io this is exposed to\nHTTP exports and scripts as the `{{lastExportDateTime}}` variable.\n\n- First run: behaves like a full export (no cutoff available yet)\n- Subsequent runs: uses `{{lastExportDateTime}}` as the lower bound (cutoff)\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. Primary configuration method depends on adapter type:\n    - For HTTP exports: Use {{lastExportDateTime}} variable in relativeURI or body\n    - For specific application adapters: Use dateField to specify timestamp fields\n\n2. 
The system automatically maintains the last successful run timestamp\n    - No need to store or manage timestamps in your own code\n    - First run fetches all records (equivalent to a standard export)\n    - Subsequent runs use this timestamp as the starting point\n\n3. Error handling and recovery:\n    - If an export fails, the next run uses the last successful timestamp\n    - Records created/modified during a failed run will be included in the next run\n    - The lagOffset field can be used to handle edge cases\n","properties":{"dateField":{"type":"string","description":"Specifies one or more timestamp fields to filter records by modification date.\n\n**Field behavior**\n\nThis field determines which record timestamp(s) are compared against the last successful run time\nto identify changed records. Key characteristics:\n\n- REQUIRED for most adapter types (except HTTP and REST where this field is not supported)\n- Can reference a single field or multiple comma-separated fields\n- Field(s) must exist in the source system and contain valid date/time values\n- When multiple fields are specified, they are processed sequentially\n\n**Implementation patterns**\n\n**Single Field Pattern**\n```\n\"dateField\": \"lastModifiedDate\"\n```\n- Records where lastModifiedDate > last run time are exported\n- Most common pattern, suitable for most applications\n- Works when a single field reliably tracks all changes\n\n**Multiple Field Pattern**\n```\n\"dateField\": \"createdAt,lastModified\"\n```\n- First exports records where createdAt > last run time\n- Then exports records where lastModified > last run time\n- Useful when different operations update different timestamp fields\n- Handles cases where some records only have creation timestamps\n\n**Critical adaptor-specific instruction:**\n- The adaptor type is HTTP. 
For HTTP exports, the \"dateField\" property MUST NOT be included in the delta configuration.\n- HTTP exports use the {{lastExportDateTime}} variable directly in the relativeURI or body instead of dateField.\n- DO NOT include \"dateField\" in your response. If you include it, the configuration will be invalid.\n\nExample HTTP query with implicit delta:\n```\n\"/api/v1/users?modified_since={{lastExportDateTime}}\"\n```\n\nExample (Business Central) newly created records:\n```\n\"/businesscentral/companies({{record.companyId}})/customers?$filter=systemCreatedAt gt {{lastExportDateTime}}\"\n```\n\nFor Salesforce, this field is required and has the following default values:\n- LastModifiedDate\n- CreatedDate\n- SystemModstamp\n- LastActivityDate\n- LastViewedDate\n- LastReferencedDate\nAlso, any custom fields that are not listed above but are timestamp fields will be added to the default values.\n"},"dateFormat":{"type":"string","description":"Defines the date/time format expected by the source system's API.\n\n**Field behavior**\n\nThis field controls how the system formats the timestamp used for filtering:\n\n- OPTIONAL: Only needed when the source system doesn't support ISO8601\n- Default: ISO8601 format (YYYY-MM-DDTHH:mm:ss.sssZ)\n- Uses Moment.js formatting tokens\n- Directly affects the format of {{lastExportDateTime}} when used in HTTP requests\n\n**Implementation patterns**\n\n**Standard Date Format**\n```\n\"dateFormat\": \"YYYY-MM-DD\"  // 2023-04-15\n```\n- For APIs that accept date-only values\n- Will truncate time portion (potentially creating a wider filter window)\n\n**Custom DateTime Format**\n```\n\"dateFormat\": \"MM/DD/YYYY HH:mm:ss\"  // 04/15/2023 14:30:00\n```\n- For APIs with specific formatting requirements\n- Especially common with older or proprietary systems\n\n**Localized Format**\n```\n\"dateFormat\": \"DD-MMM-YYYY HH:mm:ss\"  // 15-Apr-2023 14:30:00\n```\n- For systems requiring locale-specific representations\n- Often needed for ERP 
systems or regional applications\n\nLeave this field unset unless the source system explicitly requires a non-ISO8601 format.\n"},"lagOffset":{"type":"integer","description":"Specifies a time buffer (in milliseconds) to account for system data propagation delays.\n\n**Field behavior**\n\nThis field addresses synchronization issues caused by replication or indexing delays:\n\n- OPTIONAL: Only needed for systems with known data visibility delays\n- Value is SUBTRACTED from the last successful run timestamp\n- Creates an overlapping window to catch records that were being processed\n  during the previous export\n- Measured in milliseconds (1000ms = 1 second)\n\n**Implementation pattern**\n\nThe formula for the effective filter date is:\n```\neffectiveFilterDate = lastSuccessfulRunTime - lagOffset\n```\n\n**Common values**\n\n- 15000 (15 seconds): Typical for systems with short indexing delays\n- 60000 (1 minute): Common for systems with moderate replication lag\n- 300000 (5 minutes): For systems with significant processing delays\n\n**Diagnosis**\n\nThis field should be configured when you observe:\n- Records occasionally missing from delta exports\n- Records created/modified near the export run time being skipped\n- Inconsistent results between runs with similar data changes\n\nIMPORTANT: Setting this value too high decreases efficiency by processing\nredundant records. Set only as high as needed to avoid missed records.\n"}}},"Test":{"type":"object","description":"Configuration object for limiting data volume during development and testing.\n\nThis object is REQUIRED when the export's type field is set to \"test\" and should not be\nincluded for other export types. Test exports are designed to safely retrieve small data\nsamples without processing full datasets, making them ideal for development and validation.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. 
Test exports behave identically to standard exports except for the record limit\n    - All filters, pagination, and processing logic remain intact\n    - Only the total output volume is artificially capped\n\n2. Test exports do not store state between runs\n    - Unlike delta exports, each test export starts fresh\n    - No need to reset any state when transitioning from test to production\n\n3. Common implementation scenarios:\n    - During initial integration development\n    - When diagnosing data format or transformation issues\n    - When performance testing with controlled data volumes\n    - For demonstrations or proof-of-concept implementations\n\n4. Transitioning to production:\n    - Simply change type from \"test\" to null/undefined for standard exports\n    - Change type from \"test\" to \"delta\" for incremental exports\n    - No other configuration changes are typically needed\n","properties":{"limit":{"type":"integer","description":"Specifies the maximum number of records to process in a single test export run.\n\n**Field behavior**\n\nThis field controls the data volume during test executions:\n\n- REQUIRED when the export's type field is set to \"test\"\n- Accepts integer values between 1 and 100\n- Enforced by the system regardless of pagination settings\n- Applies to top-level records (before oneToMany processing)\n\n**Implementation considerations**\n\n**Balance between volume and usefulness**\n\nThe ideal limit depends on your testing objectives:\n\n- 1-5 records: Good for initial implementation and format verification\n- 10-25 records: Useful for testing transformation logic and identifying edge cases\n- 50-100 records: Better for performance testing and data pattern analysis\n\n**System enforced maximum**\n\n```\n\"limit\": 100  // Maximum allowed value\n```\n\nThe system enforces a hard limit of 100 records for all test exports to prevent\naccidental processing of large datasets during development.\n\n**Relationship with pageSize**\n\nThe 
test limit overrides but does not replace the export's pageSize:\n\n- If limit < pageSize: Only one page is processed with limit records\n- If limit > pageSize: Multiple pages are processed until limit is reached\n- Either way, the total records processed will not exceed the limit value\n\nIMPORTANT: When transitioning from test to production, you don't need to remove\nthis configuration - simply change the export's type field to remove the test limit.\n","minimum":1,"maximum":100}}},"Once":{"type":"object","description":"Configuration object for flag-based exports that process records exactly once.\n\nThis object is REQUIRED when the export's type field is set to \"once\" and should not be\nincluded for other export types. Once exports use a boolean/checkbox field in the source system\nto track which records have been processed, creating a reliable idempotent data extraction pattern.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n1. System behavior during execution:\n    - First, the export retrieves all records where the specified boolean field is false\n    - After successfully processing these records, the system automatically sets the field to true\n    - On subsequent runs, previously processed records are excluded\n\n2. Prerequisites in the source system:\n    - The source must have a boolean/checkbox field that can be used as a processing flag\n    - Your connection must have write access to update this field after export\n    - The field should be indexed for optimal performance\n\n3. Common implementation scenarios:\n    - One-time migrations where data should not be duplicated\n    - Processing queues where records are marked as \"processed\"\n    - Compliance scenarios requiring audit trails of exported records\n    - Implementing exactly-once delivery semantics\n\n4. 
Error handling behavior:\n    - If the export fails, the boolean fields remain unchanged\n    - Records will be retried on the next run\n    - No manual intervention is required for recovery\n","properties":{"booleanField":{"type":"string","description":"Specifies the API field name of the boolean/checkbox that tracks processed records.\n\n**Field behavior**\n\nThis field identifies which boolean field in the source system controls the export filtering:\n\n- REQUIRED when the export's type field is set to \"once\"\n- Must reference a valid boolean/checkbox field in the source system\n- Must be writeable by the connection's authentication credentials\n- The system performs two operations with this field:\n  1. Filters to only include records where this field is false\n  2. Updates processed records by setting this field to true\n\n**Implementation patterns**\n\n**Using dedicated tracking fields**\n```\n\"booleanField\": \"isExported\"\n```\n- Create a dedicated field specifically for integration tracking\n- Provides clear separation between business and integration logic\n- Most maintainable approach for long-term operations\n\n**Using existing status fields**\n```\n\"booleanField\": \"isProcessed\"\n```\n- Leverage existing status fields if they align with your integration needs\n- Ensure the field's meaning is compatible with your integration logic\n- Consider potential conflicts with other processes using the same field\n\n**Targeted export tracking**\n```\n\"booleanField\": \"exported_to_netsuite\"\n```\n- For systems synchronizing to multiple destinations\n- Create separate tracking fields for each destination system\n- Enables independent control of different export processes\n\n**Technical considerations**\n\n- Field updates happen in batches after each successful page of records is processed\n- The field update uses the same connection as the export operation\n- For optimal performance, the boolean field should be indexed in the source database\n- Boolean 
values of 0/1, true/false, and yes/no are all properly interpreted\n\nIMPORTANT: Ensure the field is not being updated by other processes, as this could\ncause records to be skipped unexpectedly. If multiple processes need to track exports,\nuse separate boolean fields for each process.\n"}}},"Webhook":{"type":"object","description":"Configuration object for real-time event listeners that receive data via incoming HTTP requests.\n\nThis object is REQUIRED when the export's type field is set to \"webhook\" and should not be\nincluded for other export types. Webhook exports create dedicated HTTP endpoints that can receive\ndata from external systems in real-time, enabling event-driven integration architectures.\n\nWhen configured, the system:\n1. Creates a unique URL endpoint for receiving HTTP requests\n2. Validates incoming requests based on your security configuration\n3. Processes the payload and passes it to subsequent flow steps\n4. Returns a configurable HTTP response to the caller\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Webhook security models**\n\nWebhooks support multiple security verification methods, each requiring different fields:\n\n1. **HMAC Verification** (Most secure, recommended for production)\n    - Required fields: verify=\"hmac\", key, algorithm, encoding, header\n    - Verifies a cryptographic signature included with each request\n    - Ensures data integrity and authenticity\n\n2. **Token Verification** (Simple shared secret)\n    - Required fields: verify=\"token\", token, path\n    - Checks for a specific token value in the request\n    - Simpler but less secure than HMAC\n\n3. **Basic Authentication** (HTTP standard)\n    - Required fields: verify=\"basic\", username, password\n    - Uses HTTP Basic Authentication headers\n    - Compatible with most HTTP clients\n\n4. 
**Secret URL** (Simplest but least secure)\n    - Required fields: verify=\"secret_url\", token\n    - Relies solely on URL obscurity for security\n    - The token is embedded in the webhook URL to create a unique, hard-to-guess endpoint\n    - Suitable only for non-sensitive data or testing\n\n5. **Public Key** (Advanced, for specific providers)\n    - Required fields: verify=\"publickey\", key\n    - Uses public key cryptography for verification\n    - Only available for certain providers\n\n**Response customization**\n\nYou can customize how the webhook responds to callers with these field groups:\n\n1. **Standard Success Response**\n    - Fields: successStatusCode, successBody, successMediaType, successResponseHeaders\n    - Controls how the webhook responds to valid requests\n\n2. **Challenge Response** (For subscription verification)\n    - Fields: challengeSuccessBody, challengeSuccessStatusCode, challengeSuccessMediaType, challengeResponseHeaders\n    - Controls how the webhook responds to verification/challenge requests\n\n**Implementation scenarios**\n\nWebhooks are commonly used for:\n\n1. **Real-time data synchronization**\n    - E-commerce platforms sending order notifications\n    - CRM systems delivering contact updates\n    - Payment processors reporting transaction events\n\n2. **Event-driven processes**\n    - Triggering fulfillment when orders are placed\n    - Initiating approval workflows on document submissions\n    - Executing business logic when status changes occur\n\n3. 
**System integration**\n    - Connecting SaaS applications without polling\n    - Building composite applications from microservices\n    - Creating fan-out architectures for event distribution\n","properties":{"provider":{"type":"string","description":"Specifies the source application sending webhook data, enabling platform-specific optimizations.\n\n**Field behavior**\n\nThis field determines how the webhook handles incoming requests:\n\n- OPTIONAL: Defaults to \"custom\" if not specified\n- When a specific provider is selected, the system:\n  1. Pre-configures appropriate security settings for that platform\n  2. Applies platform-specific payload parsing rules\n  3. May enable additional features only relevant to that provider\n\n**Implementation guidance**\n\n**Provider-specific configurations**\n\nWhen you know the exact source system, select its specific provider:\n```\n\"provider\": \"shopify\"\n```\n- Automatically configures proper HMAC verification settings\n- Optimizes payload parsing for Shopify's webhook format\n- May enable additional Shopify-specific features\n\n**Custom configuration**\n\nFor generic webhooks or unlisted providers, use custom:\n```\n\"provider\": \"custom\"\n```\n- Requires manual configuration of all security settings\n- Maximum flexibility for handling any webhook format\n- Recommended for custom applications or newer platforms\n\n**Selection criteria**\n\nChoose a specific provider when:\n- The source system is explicitly listed in the enum values\n- You want to leverage pre-configured settings\n- The integration must follow platform-specific practices\n\nChoose \"custom\" when:\n- The source system is not listed\n- You need full control over webhook configuration\n- You're building a custom interface or protocol\n\nIMPORTANT: Some providers enforce specific security methods. 
When selecting a\nprovider, ensure you have the necessary security credentials (tokens, keys, etc.)\nas required by that platform.\n","enum":["github","shopify","travis","travis-org","slack","dropbox","onfleet","helpscout","errorception","box","stripe","aha","jira","pagerduty","postmark","mailchimp","intercom","activecampaign","segment","recurly","shipwire","surveymonkey","parseur","mailparser-io","hubspot","integrator-extension","custom","sapariba","happyreturns","typeform"]},"verify":{"type":"string","description":"Defines the security verification method used to authenticate incoming webhook requests.\n\n**Field behavior**\n\nThis field is the primary control for webhook security:\n\n- REQUIRED for all webhook exports\n- Determines which additional security fields must be configured\n- Controls how incoming requests are validated before processing\n\n**Verification methods**\n\n**HMAC Verification**\n```\n\"verify\": \"hmac\"\n```\n- Most secure method, cryptographically verifies request integrity\n- REQUIRES: key, algorithm, encoding, header fields\n- Validates a cryptographic signature included in the request header\n- Works well with providers that support HMAC (Shopify, Stripe, GitHub, etc.)\n\n**Token Verification**\n```\n\"verify\": \"token\"\n```\n- Simple verification using a shared secret token\n- REQUIRES: token, path fields\n- Checks for a specific token value in the request body or query params\n- Good for simple scenarios with trusted networks\n\n**Basic Authentication**\n```\n\"verify\": \"basic\"\n```\n- Standard HTTP Basic Authentication\n- REQUIRES: username, password fields\n- Validates credentials sent in the Authorization header\n- Compatible with most HTTP clients and tools\n\n**Public Key**\n```\n\"verify\": \"publickey\"\n```\n- Advanced verification using public key cryptography\n- REQUIRES: key field (containing the public key)\n- Only available for certain providers that use asymmetric cryptography\n- Highest security level but more 
complex to configure\n\n**Secret URL**\n```\n\"verify\": \"secret_url\"\n```\n- Simplest method, relies solely on the obscurity of the URL\n- REQUIRES: token field (the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint)\n- Only suitable for non-sensitive data or testing environments\n- Not recommended for production use with sensitive data\n\nIMPORTANT: Choose the security method that matches your source system's capabilities.\nIf the source system supports multiple verification methods, HMAC is generally the\nmost secure option.\n","enum":["token","hmac","basic","publickey","secret_url"]},"token":{"type":"string","description":"Specifies the shared secret token value used to verify incoming webhook requests.\n\n**Field behavior**\n\nThis field defines the expected token value:\n\n- REQUIRED when verify=\"token\" or verify=\"secret_url\"\n- When verify=\"token\": must be a string that exactly matches what the sender will provide. Used with the path field to locate and validate the token in the request.\n- When verify=\"secret_url\": the token is embedded in the webhook URL to create a unique, hard-to-guess endpoint. Generate a random, high-entropy value.\n- Case-sensitive and whitespace-sensitive\n\n**Implementation guidance**\n\nThe token verification flow works as follows:\n1. The webhook receives an incoming request\n2. The system looks for the token at the location specified by the path field\n3. If the found value exactly matches this token value, the request is processed\n4. 
If no match is found, the request is rejected with a 401 error\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy token (32+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't use predictable values like company names or common words\n- Rotate tokens periodically for sensitive integrations\n\n**Common implementations**\n\n```\n\"token\": \"3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e\"\n```\n\n```\n\"token\": \"whsec_8fb2e91a5c6b3e7d9f2a1c5b8e3a7c4f\"\n```\n\nIMPORTANT: Never share this token in public repositories or documentation.\nTreat it as a sensitive credential similar to a password.\n"},"algorithm":{"type":"string","description":"Specifies the cryptographic hashing algorithm used for HMAC signature verification.\n\n**Field behavior**\n\nThis field determines how signatures are validated:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the algorithm used by the webhook sender\n- Affects security strength and compatibility\n\n**Algorithm selection**\n\n**SHA-256 (Recommended)**\n```\n\"algorithm\": \"sha256\"\n```\n- Modern, secure hash algorithm\n- Industry standard for most new webhook implementations\n- Preferred choice for all new integrations\n- Used by Shopify, Stripe, and many modern platforms\n\n**SHA-1 (Legacy)**\n```\n\"algorithm\": \"sha1\"\n```\n- Older, less secure algorithm\n- Still used by some legacy systems\n- Only select if the provider explicitly requires it\n- GitHub webhooks used this historically\n\n**SHA-384/SHA-512 (High Security)**\n```\n\"algorithm\": \"sha384\"\n\"algorithm\": \"sha512\"\n```\n- Higher security variants with longer digests\n- Use when specified by the provider or for sensitive data\n- Less common but supported by some security-focused systems\n\nIMPORTANT: This MUST match the algorithm used by the webhook sender.\nMismatched algorithms will cause all webhook requests to be 
rejected.\n","enum":["sha1","sha256","sha384","sha512"]},"encoding":{"type":"string","description":"Specifies the encoding format used for the HMAC signature in webhook requests.\n\n**Field behavior**\n\nThis field determines how signature values are encoded:\n\n- REQUIRED when verify=\"hmac\"\n- Must match the encoding used by the webhook sender\n- Affects how binary signature values are represented as strings\n\n**Encoding options**\n\n**Hexadecimal (hex)**\n```\n\"encoding\": \"hex\"\n```\n- Represents the signature as a string of hexadecimal characters (0-9, a-f)\n- Most common encoding for web-based systems\n- Used by many platforms including Stripe and some Shopify implementations\n- Example output: \"8f7d56a32e1c9b47d882e3aa91341f64\"\n\n**Base64**\n```\n\"encoding\": \"base64\"\n```\n- Represents the signature using base64 encoding\n- More compact than hex (about 33% shorter)\n- Used by platforms like Shopify (newer implementations) and some GitHub scenarios\n- Example output: \"j31WozbhtrHYeC46qRNB9k==\"\n\nIMPORTANT: This MUST match the encoding used by the webhook sender.\nMismatched encoding will cause all webhook requests to be rejected even if\nthe signature is mathematically correct.\n","enum":["hex","base64"]},"key":{"type":"string","description":"Specifies the secret key used to verify cryptographic signatures in incoming webhooks.\n\n**Field behavior**\n\nThis field provides the shared secret for signature verification:\n\n- REQUIRED when verify=\"hmac\" or verify=\"publickey\"\n- Contains the secret value known to both sender and receiver\n- Used with the incoming payload to validate the signature\n- Highly sensitive security credential\n\n**Implementation guidance**\n\n**For hmac verification**\n\nThe key is used in the following verification process:\n1. The webhook receives an incoming request with a signature\n2. The system computes an HMAC of the request body using this key and the specified algorithm\n3. 
This computed signature is compared with the signature from the request header\n4. If they match exactly, the request is authenticated and processed\n\n**Security best practices**\n\nFor maximum security:\n- Use a random, high-entropy key (32+ characters)\n- Include a mix of characters and avoid dictionary words\n- Never share this key in code repositories or logs\n- Rotate keys periodically for sensitive integrations\n- Use environment variables or secure credential storage\n\n**Common implementations**\n\n```\n\"key\": \"whsec_3a7c4f8b2e9d1a5c6b3e7d9f2a1c5b8e3a7c4f8b\"\n```\n\n```\n\"key\": \"sk_test_51LZIr9B9Y6YIwSKx8647589JKhdjs889KJsk389\"\n```\n\nIMPORTANT: This key should be treated as a highly sensitive credential,\nsimilar to a private key or password. It should never be exposed publicly\nor logged in application logs.\n"},"header":{"type":"string","description":"Specifies the HTTP header name that contains the signature for HMAC verification.\n\n**Field behavior**\n\nThis field identifies where to find the signature in incoming requests:\n\n- REQUIRED when verify=\"hmac\"\n- Must exactly match the header name used by the webhook sender\n- Case-insensitive (HTTP headers are not case-sensitive)\n\n**Common header patterns**\n\n**Platform-specific headers**\n\nMany platforms use standardized header names for their signatures:\n\n```\n\"header\": \"X-Shopify-Hmac-SHA256\"  // For Shopify webhooks\n```\n\n```\n\"header\": \"X-Hub-Signature-256\"    // For GitHub webhooks\n```\n\n```\n\"header\": \"Stripe-Signature\"        // For Stripe webhooks\n```\n\n**Generic signature headers**\n\nFor custom implementations or less common platforms:\n\n```\n\"header\": \"X-Webhook-Signature\"     // Common generic format\n```\n\n```\n\"header\": \"X-Signature\"             // Simplified format\n```\n\n**Implementation notes**\n\n- The system will look for this exact header name in incoming requests\n- If the header is not found, the request will be rejected with a 401 
error\n- Some platforms may include a prefix in the header value (e.g., \"sha256=\")\n  which is handled automatically by the system\n\nIMPORTANT: This must exactly match the header name used by the webhook sender.\nIf you're unsure about the correct header name, consult the sender's documentation\nor use a tool like cURL with verbose output to inspect an example request.\n"},"path":{"type":"string","description":"Specifies the location of the verification token in incoming webhook requests.\n\n**Field behavior**\n\nThis field determines where to find the token for verification:\n\n- REQUIRED when verify=\"token\"\n- Defines a JSON path to locate the token in the request body\n- For query parameters, use the appropriate path format (typically at root level)\n\n**Implementation patterns**\n\n**Token in request body**\n\nFor tokens embedded in JSON payloads:\n\n```\n\"path\": \"meta.token\"        // For { \"meta\": { \"token\": \"xyz123\" } }\n```\n\n```\n\"path\": \"verification.key\"  // For { \"verification\": { \"key\": \"xyz123\" } }\n```\n\n**Token at root level**\n\nFor tokens in the top level of the request:\n\n```\n\"path\": \"token\"             // For { \"token\": \"xyz123\", \"data\": {...} }\n```\n\n**Token in query parameters**\n\nFor tokens sent as URL query parameters, use the parameter name:\n\n```\n\"path\": \"verify_token\"      // For /webhook?verify_token=xyz123\n```\n\n**Verification process**\n\n1. The webhook receives an incoming request\n2. The system uses this path to extract the token value\n3. The extracted value is compared with the configured token\n4. If they match exactly, the request is processed\n\nIMPORTANT: The path is case-sensitive and must exactly match the structure\nof incoming requests. 
For query parameters, the system automatically checks\nboth the body and query string using the provided path.\n"},"username":{"type":"string","description":"Specifies the username for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines one half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the password field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nWhen Basic Authentication is used, the webhook requires incoming requests to include\nan Authorization header containing \"Basic \" followed by a base64-encoded string of\n\"username:password\".\n\nExample header:\n```\nAuthorization: Basic d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\n```\n\nWhere \"d2ViaG9va191c2VyOndlYmhvb2tfcGFzc3dvcmQ=\" is the base64 encoding of\n\"webhook_user:webhook_password\".\n\n**Security considerations**\n\nBasic Authentication:\n- Is widely supported by HTTP clients and servers\n- Should ONLY be used over HTTPS to prevent credential interception\n- Provides a simple authentication mechanism but without integrity verification\n- Is less secure than HMAC verification for webhook scenarios\n\nIMPORTANT: Always use strong, unique credentials rather than generic or easily\nguessable values. 
Basic Authentication is less secure than HMAC for webhooks\nbut can be appropriate for simple scenarios or when working with systems that\ndon't support more advanced verification methods.\n"},"password":{"type":"string","description":"Specifies the password for webhook HTTP Basic Authentication security.\n\n**Field behavior**\n\nThis field defines the second half of the Basic Authentication credentials:\n\n- REQUIRED when verify=\"basic\"\n- Used in conjunction with the username field\n- Case-sensitive string value\n- Encoded in the standard HTTP Basic Authentication format\n\n**Implementation notes**\n\nThis password is combined with the username and encoded in base64 format for\nthe HTTP Authorization header. The webhook verifies that incoming requests contain\nthe correct encoded credentials before processing them.\n\n**Security best practices**\n\nFor maximum security:\n- Use a strong, randomly generated password (16+ characters)\n- Include a mix of uppercase, lowercase, numbers, and special characters\n- Don't reuse passwords from other systems\n- Avoid dictionary words or predictable patterns\n- Rotate passwords periodically for sensitive integrations\n\nIMPORTANT: This password should be treated as a sensitive credential.\nNever share it in public repositories, documentation, or logs. 
Always use\nHTTPS for webhooks using Basic Authentication to prevent credential interception.\n"},"successStatusCode":{"type":"integer","description":"Specifies the HTTP status code sent back to webhook callers after successful processing.\n\n**Field behavior**\n\nThis field controls the HTTP response status code:\n\n- OPTIONAL: Defaults to 204 (No Content) if not specified\n- Affects how webhook callers interpret the success response\n- Must be a valid HTTP status code in the 2xx range\n\n**Common status codes**\n\n**204 No Content (Default)**\n```\n\"successStatusCode\": 204\n```\n- Returns no response body\n- Most efficient option as it minimizes response size\n- Appropriate when the caller doesn't need confirmation details\n- Automatically disables successBody (even if specified)\n\n**200 OK**\n```\n\"successStatusCode\": 200\n```\n- Standard success response\n- Allows returning a response body with details\n- Most widely used and recognized success code\n- Compatible with all HTTP clients\n\n**202 Accepted**\n```\n\"successStatusCode\": 202\n```\n- Indicates request was accepted for processing but may not be complete\n- Appropriate for asynchronous processing scenarios\n- Signals that the webhook was received but full processing is pending\n\n**Implementation considerations**\n\nThe appropriate status code depends on your webhook caller's expectations:\n\n- Some systems require specific status codes to consider the delivery successful\n- If the caller retries on anything other than 2xx, use 200 or 202\n- If the caller needs confirmation details, use 200 with a response body\n- If efficiency is paramount, use 204 (default)\n\nIMPORTANT: When using 204 No Content, any successBody configuration will be ignored\nas this status code specifically indicates no response body is being returned.\n","default":204},"successBody":{"type":"string","description":"Specifies the HTTP response body sent back to webhook callers after successful processing.\n\n**Field 
behavior**\n\nThis field controls the content returned to the webhook caller:\n\n- OPTIONAL: Defaults to empty (no body) if not specified\n- Ignored when successStatusCode is 204 (No Content)\n- Content type is determined by the successMediaType field\n- Can contain static text or structured data (JSON, XML)\n\n**Implementation patterns**\n\n**Simple acknowledgment**\n```\n\"successBody\": \"OK\"\n```\n- Minimal plaintext response\n- Confirms receipt without details\n- Most efficient for basic acknowledgment\n\n**Structured response (JSON)**\n```\n\"successBody\": \"{\\\"success\\\":true,\\\"message\\\":\\\"Webhook received\\\"}\"\n```\n- Provides structured data about the result\n- Can include more detailed status information\n- Compatible with programmatic processing by the caller\n- Remember to escape quotes in JSON strings\n\n**Custom confirmation**\n```\n\"successBody\": \"{\\\"status\\\":\\\"received\\\",\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Can include dynamic values using handlebars templates\n- Useful for providing receipt confirmation with metadata\n\n**Response flow**\n\nThe response body is sent after the webhook payload has been:\n1. Received and authenticated\n2. Validated against any configured requirements\n3. 
Accepted for processing by the system\n\nIMPORTANT: The successBody will only be returned if successStatusCode is NOT 204.\nIf you want to return a body, make sure to set successStatusCode to 200, 201, or 202.\n"},"successMediaType":{"type":"string","description":"Specifies the Content-Type header for successful webhook responses.\n\n**Field behavior**\n\nThis field controls how the response body is interpreted:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only relevant when returning a successBody and not using status code 204\n- Determines the Content-Type header in the HTTP response\n- Must be consistent with the actual format of the successBody\n\n**Media type options**\n\n**JSON (Default)**\n```\n\"successMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Use when successBody contains valid JSON\n- Most common for API responses\n- Allows structured data that clients can parse programmatically\n\n**XML**\n```\n\"successMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Use when successBody contains valid XML\n- Necessary for systems expecting XML responses\n- Less common in modern APIs but still used in some enterprise systems\n\n**Plain Text**\n```\n\"successMediaType\": \"plaintext\"\n```\n- Sets Content-Type: text/plain\n- Use for simple string responses\n- Most compatible option for basic acknowledgments\n- Appropriate when successBody is just \"OK\" or similar\n\n**Implementation considerations**\n\n- The media type must match the actual content format in successBody\n- If returning JSON in successBody, use \"json\" (most common)\n- If returning a simple text acknowledgment, use \"plaintext\"\n- If the caller specifically requires XML, use \"xml\"\n\nIMPORTANT: When successStatusCode is 204 (No Content), this field has no effect\nsince no body is returned, and therefore no Content-Type is 
needed.\n","default":"json","enum":["json","xml","plaintext"]},"successResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in successful webhook responses.\n\n**Field behavior**\n\nThis field allows additional HTTP headers in the response:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Each entry defines a name/value pair for a single header\n- Applied to all successful responses (regardless of status code)\n- Can override standard headers like Content-Type\n\n**Implementation patterns**\n\n**Standard use cases**\n\nCustom headers are useful for:\n- Providing metadata about the response\n- Enabling CORS for browser-based webhook callers\n- Including tracking or correlation IDs\n- Adding custom security headers\n\n**Common header examples**\n\nCORS support:\n```json\n[\n  {\"name\": \"Access-Control-Allow-Origin\", \"value\": \"*\"},\n  {\"name\": \"Access-Control-Allow-Methods\", \"value\": \"POST, OPTIONS\"}\n]\n```\n\nRequest tracking:\n```json\n[\n  {\"name\": \"X-Request-ID\", \"value\": \"{{jobId}}\"},\n  {\"name\": \"X-Webhook-Received\", \"value\": \"{{currentDateTime}}\"}\n]\n```\n\nCustom application headers:\n```json\n[\n  {\"name\": \"X-API-Version\", \"value\": \"1.0\"},\n  {\"name\": \"X-Processing-Status\", \"value\": \"accepted\"}\n]\n```\n\n**Technical details**\n\n- Header names are case-insensitive as per HTTP specification\n- Some headers like Content-Type can be set via other fields (successMediaType)\n- Headers defined here take precedence over automatically set headers\n- The values can contain handlebars expressions for dynamic content\n\nIMPORTANT: Be careful when setting security-related headers like\nAccess-Control-Allow-Origin, as improper values could create security vulnerabilities.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeResponseHeaders":{"type":"array","description":"Defines custom HTTP headers to include in webhook 
challenge responses.\n\n**Field behavior**\n\nThis field configures headers for subscription verification:\n\n- OPTIONAL: If omitted, only standard headers are included\n- Only used for webhook verification/challenge requests\n- Each entry defines a name/value pair for a single header\n- Particularly important for platforms requiring specific verification headers\n\n**Challenge verification context**\n\nMany webhook providers implement a verification process:\n1. Before sending real events, they send a \"challenge\" request\n2. The webhook must respond with specific headers and/or body content\n3. Only after successful verification will real webhook events be sent\n\nThis field allows customizing the headers sent during this verification step.\n\n**Common patterns by platform**\n\n**Facebook/Instagram**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"text/plain\"}\n]\n```\n\n**Slack**\n```json\n[\n  {\"name\": \"Content-Type\", \"value\": \"application/json\"}\n]\n```\n\n**Custom implementations**\n```json\n[\n  {\"name\": \"X-Challenge-Response\", \"value\": \"passed\"},\n  {\"name\": \"X-Verification-Status\", \"value\": \"success\"}\n]\n```\n\nIMPORTANT: The specific headers required vary by platform. Consult the webhook\nprovider's documentation for the exact verification requirements. 
Incorrect challenge\nresponse headers may prevent successful webhook subscription.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"challengeSuccessBody":{"type":"string","description":"Specifies the HTTP response body for webhook challenge/verification requests.\n\n**Field behavior**\n\nThis field defines the verification response content:\n\n- OPTIONAL: If omitted, a default empty response is sent\n- Only used for webhook subscription verification requests\n- Content type is determined by the challengeSuccessMediaType field\n- Often needs to contain specific values expected by the webhook provider\n\n**Verification patterns by platform**\n\nDifferent webhook providers implement different verification mechanisms:\n\n**Facebook/Instagram**\n```\n\"challengeSuccessBody\": \"{{hub.challenge}}\"\n```\n- Must echo back the challenge value sent in the request\n- Uses handlebars expression to access the challenge parameter\n\n**Slack**\n```\n\"challengeSuccessBody\": \"{\\\"challenge\\\":\\\"{{challenge}}\\\"}\"\n```\n- Returns the challenge value in a JSON structure\n- Required for Slack's Events API verification\n\n**Generic challenge-response**\n```\n\"challengeSuccessBody\": \"{\\\"verified\\\":true,\\\"timestamp\\\":\\\"{{currentDateTime}}\\\"}\"\n```\n- Simple confirmation response for custom implementations\n- Can include additional metadata as needed\n\n**Implementation considerations**\n\n- The exact format is dictated by the webhook provider's requirements\n- Some platforms require echoing back specific request parameters\n- Others require a structured response with specific fields\n- Handlebars expressions ({{variable}}) can access request data\n\nIMPORTANT: Incorrect challenge responses will prevent webhook subscription verification.\nAlways consult the webhook provider's documentation for exact requirements.\n"},"challengeSuccessStatusCode":{"type":"integer","description":"Specifies the HTTP status 
code for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the verification response status:\n\n- OPTIONAL: Defaults to 200 (OK) if not specified\n- Only used for webhook subscription verification requests\n- Must match what the webhook provider expects for successful verification\n- Most platforms require a 200 OK response\n\n**Common status codes for verification**\n\n**200 OK (Default)**\n```\n\"challengeSuccessStatusCode\": 200\n```\n- Standard success response\n- Most webhook platforms expect this status code\n- Generally the safest option for verification\n\n**201 Created**\n```\n\"challengeSuccessStatusCode\": 201\n```\n- Used by some systems to indicate subscription was created\n- Less common for verification but used in some custom implementations\n\n**Platform-specific requirements**\n\nMost major webhook providers require specific status codes:\n\n- Facebook/Instagram: 200\n- Slack: 200\n- GitHub: 200\n- Shopify: 200\n- Stripe: 200\n\nIMPORTANT: Using the wrong status code will cause the verification to fail.\nIf you're unsure, keep the default 200 status code, as it's the most widely\naccepted for webhook verifications.\n","default":200},"challengeSuccessMediaType":{"type":"string","description":"Specifies the Content-Type header for webhook challenge/verification responses.\n\n**Field behavior**\n\nThis field controls the challenge response format:\n\n- OPTIONAL: Defaults to \"json\" if not specified\n- Only used for webhook subscription verification requests\n- Determines the Content-Type header in the verification response\n- Must match the format of the challengeSuccessBody content\n\n**Common media types for verification**\n\n**JSON (Default)**\n```\n\"challengeSuccessMediaType\": \"json\"\n```\n- Sets Content-Type: application/json\n- Required by Slack and many modern webhook providers\n- Use when returning structured verification data\n\n**Plain Text**\n```\n\"challengeSuccessMediaType\": \"plaintext\"\n```\n- 
Sets Content-Type: text/plain\n- Required by Facebook/Instagram webhook verification\n- Use when the challenge response is a simple string\n\n**XML**\n```\n\"challengeSuccessMediaType\": \"xml\"\n```\n- Sets Content-Type: application/xml\n- Less common but used by some enterprise systems\n- Use only when the webhook provider specifically requires XML\n\n**Platform-specific requirements**\n\n- Facebook/Instagram: plaintext (when echoing hub.challenge)\n- Slack: json (for Events API verification)\n- Most modern APIs: json\n\nIMPORTANT: The media type must match both the format of your challengeSuccessBody\nand the requirements of the webhook provider. Mismatched content types can cause\nverification to fail even if the response body is correct.\n","default":"json","enum":["json","xml","plaintext"]}}},"Distributed":{"type":"object","description":"Configuration object for distributed exports that require authentication.\n\nThis object contains authentication credentials needed for distributed processing.\n","properties":{"bearerToken":{"type":"string","description":"Bearer token for authenticating distributed export requests.\n\n**Field behavior**\n\nThis token provides authentication for the distributed export:\n\n- Required for secure access to distributed endpoints\n- Must be a valid bearer token format\n- Used in Authorization header as \"Bearer {token}\"\n- Should be kept secure and rotated regularly\n\n**Implementation guidance**\n\n**Token management**\n\n- Store tokens securely (encrypted at rest)\n- Implement token rotation policies\n- Monitor token expiration dates\n- Use environment variables for token storage\n\n**Security considerations**\n\n- Never log bearer tokens in plain text\n- Implement proper access controls\n- Use HTTPS for all token transmissions\n- Validate tokens on each request\n\nIMPORTANT: Bearer tokens provide full access to the distributed export.\nTreat them as sensitive 
credentials.\n","format":"password"}},"required":["bearerToken"],"additionalProperties":false},"FileSystem-2":{"type":"object","description":"Configuration for FileSystem exports","properties":{"directoryPath":{"type":"string","description":"Directory path to retrieve files from (required)"}},"required":["directoryPath"]},"File-2":{"type":"object","description":"Configuration for file processing in exports. This object defines how files are parsed,\nfiltered, and processed across all file-based export operations within Celigo.\n\n**Export contexts**\n\nThis schema applies to multiple file-based export scenarios:\n\n1. **Source System Types**:\n    - Simple exports with file uploads through the UI\n    - HTTP exports retrieving files from web sources\n    - FTP/SFTP exports downloading files from servers\n    - Amazon S3\n    - Azure Blob Storage\n    - Google Cloud Storage\n    - And other file-based source systems\n\n**Implementation guidelines**\n\nAI agents should consider these key decision points when configuring file processing for exports:\n\n1. **File Format Selection**: Set the `type` field to match the format of the files being processed\n    (csv, json, xlsx, xml). This determines which format-specific configuration object to populate.\n\n2. **Processing Mode**: Set the `output` field based on whether you need to:\n    - Parse file contents into records (`\"records\"`)\n    - Transfer files without parsing (`\"blobKeys\"`)\n    - Only retrieve metadata about files (`\"metadata\"`)\n\n3. **File Filtering**: Use the `filter` object to selectively process files based on criteria\n    like file names, sizes, or custom logic.\n\n4. 
**Format-Specific Configuration**: Configure the corresponding object (csv, json, xlsx, xml)\n    based on the selected file type.\n\n**Field dependencies**\n\n- When `type` = \"csv\", configure the `csv` object\n- When `type` = \"json\", configure the `json` object\n- When `type` = \"xlsx\", configure the `xlsx` object\n- When `type` = \"xml\", configure the `xml` object\n- When `type` = \"filedefinition\", configure the `fileDefinition` object\n\n**Export-specific considerations**\n\nWhile the file processing configuration remains consistent, different export types may have\nadditional requirements:\n\n- **HTTP Exports**: May need authentication and specific endpoint configurations\n- **FTP/SFTP Exports**: Require server credentials and path information\n- **Cloud Storage Exports**: Need bucket/container details and access credentials\n\nThe File schema focuses specifically on how files are processed once they are\nretrieved from the source system, regardless of which export type is used.\n","properties":{"encoding":{"type":"string","description":"Character encoding used for reading and parsing file content. 
This setting is critical for ensuring proper character interpretation, especially for international data and special characters.\n\n**Encoding options and usage guidance**\n\n**UTF-8 (`\"utf8\"`)**\n- **Default Setting**: Used if no encoding is specified\n- **Best For**: Modern text files, international character sets, XML/JSON files\n- **Compatibility**: Universally supported; standard for web applications\n- **When to Use**: First choice for most new integrations; handles most languages\n\n**Windows-1252 (`\"win1252\"`)**\n- **Best For**: Legacy Windows system files, older Western European data\n- **Compatibility**: Common in Windows-based exports, especially older systems\n- **When to Use**: When files originate from older Windows systems or contain certain special characters not rendering properly in utf8\n\n**UTF-16LE (`\"utf-16le\"`)**\n- **Best For**: Unicode text with extensive character requirements\n- **Compatibility**: Microsoft Word documents, some database exports\n- **When to Use**: When files have Byte Order Mark (BOM) or are known to be 16-bit Unicode\n\n**GB18030 (`\"gb18030\"`)**\n- **Best For**: Chinese character sets\n- **Compatibility**: Official character set standard for China\n- **When to Use**: For files containing simplified or traditional Chinese characters\n\n**Mac Roman (`\"macroman\"`)**\n- **Best For**: Legacy Mac system files (pre-OS X)\n- **Compatibility**: Older Apple systems and applications\n- **When to Use**: For older files created on Apple systems\n\n**ISO-8859-1 (`\"iso88591\"`)**\n- **Best For**: Western European languages\n- **Compatibility**: Widely supported in older systems\n- **When to Use**: For legacy European language content\n\n**Shift JIS (`\"shiftjis\"`)**\n- **Best For**: Japanese character sets\n- **Compatibility**: Common in Japanese Windows and older systems\n- **When to Use**: For files containing Japanese text\n\n**Implementation guidance for AI agents**\n\n1. 
**Detection Strategy**: If encoding is unknown, first try utf8 (default), then try win1252 for Western language files with errors\n\n2. **Encoding Selection Process**:\n    - Check source system documentation for encoding specifications\n    - For files with corrupt/missing characters, test alternative encodings\n    - Consider geographic origin of data (Asian languages often require specific encodings)\n\n3. **Common Issues to Watch For**:\n    - Mojibake (garbled text): Indicates wrong encoding selection\n    - Question marks or boxes: Character conversion failures\n    - BOM markers appearing as visible characters: Consider utf-16le\n","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"]},"type":{"type":"string","description":"Defines the format of the files being processed. REQUIRED for all file-based exports except blob exports (export type \"blob\" or file output \"blobKeys\").\n\nThis field creates a critical dependency that determines which format-specific configuration object must be populated.\n\n**Format options and requirements**\n\n**CSV Files (`\"csv\"`)**\n- **Use For**: Tabular data with delimiter-separated values\n- **Required Config**: The `csv` object with settings like delimiters and header options\n- **Best For**: Simple tabular data, exports from spreadsheets, flat data structures\n- **Example Sources**: Exported reports, data extracts, simple database dumps\n\n**JSON Files (`\"json\"`)**\n- **Use For**: Hierarchical data in JavaScript Object Notation\n- **Required Config**: The `json` object, especially the `resourcePath` to locate records\n- **Best For**: Nested data structures, API responses, complex object representations\n- **Example Sources**: REST APIs, document databases, configuration files\n\n**Excel Files (`\"xlsx\"`)**\n- **Use For**: Microsoft Excel spreadsheets\n- **Required Config**: The `xlsx` object with Excel-specific settings\n- **Best For**: Business reports, formatted tabular data, multi-sheet 
documents\n- **Example Sources**: Financial reports, manually created spreadsheets\n\n**XML Files (`\"xml\"`)**\n- **Use For**: Extensible Markup Language documents\n- **Required Config**: The `xml` object, critically the `resourcePath` using XPath\n- **Best For**: Document-oriented data, SOAP responses, EDI formats\n- **Example Sources**: SOAP APIs, legacy system exports, industry standard formats\n\n**File Definition (`\"filedefinition\"`)**\n- **Use For**: Complex proprietary formats requiring custom parsing logic\n- **Required Config**: The `fileDefinition` object with the _fileDefinitionId\n- **Best For**: Legacy formats, fixed-width files, complex multi-record formats\n- **Example Sources**: Mainframe exports, proprietary formats, EDI documents\n\n**Implementation guidance**\n\n1. Determine the file format from the source system or documentation\n2. Select the matching type from the enum values\n3. Configure ONLY the corresponding format-specific object\n4. Other format-specific objects will be ignored\n\nFor AI agents: This field creates a critical dependency chain - selecting a type\ncommits you to using the corresponding configuration object.\n","enum":["csv","json","xlsx","xml","filedefinition"]},"output":{"type":"string","description":"Defines the fundamental processing mode for file data. 
This critical field determines how files are handled and what data is passed to subsequent flow steps.\n\n**Processing modes**\n\n**Content Processing (`\"records\"`)**\n- **Behavior**: Files are parsed into structured records based on their format\n- **Use When**: You need to access and manipulate the data inside files\n- **Output**: Array of record objects reflecting the file's content\n- **Example Flow**: CSV files → Parse into records → Transform → Import to target system\n- **Best For**: Data synchronization, ETL processes, content-based workflows\n- **Technical Impact**: Requires format-specific parsing; higher processing overhead\n\n**File Transfer (`\"blobKeys\"`)**\n- **Behavior**: Files are treated as binary objects and transferred without parsing\n- **Use When**: You need to move files between systems without modifying content\n- **Output**: References to the binary file objects (blobKeys)\n- **Example Flow**: Image files → Transfer as blobs → Upload to cloud storage\n- **Best For**: Binary files, images, documents, any non-textual content\n- **Technical Impact**: Lower processing overhead; maintains file integrity\n\n**File Discovery (`\"metadata\"`)**\n- **Behavior**: Only file metadata is retrieved (name, size, dates) without content\n- **Use When**: You need to inventory files before deciding which to process\n- **Output**: Array of file metadata objects\n- **Example Flow**: Scan FTP folder → Get metadata → Filter by date → Process selected files\n- **Best For**: File inventory, selective processing, large directory scanning\n- **Technical Impact**: Minimal processing overhead; fastest operation mode\n\n**Implementation guidance**\n\nThis setting profoundly affects flow architecture:\n\n1. For data integration: Use `\"records\"` to work with the file contents\n2. For file movement: Use `\"blobKeys\"` to preserve binary integrity\n3. 
For file discovery: Use `\"metadata\"` as a first step before selective processing\n\nAI agents should select this value based on whether the integration needs to\nwork with the file's content or just move/manage the files themselves.\n","enum":["records","metadata","blobKeys"]},"skipDelete":{"type":"boolean","description":"Controls whether source files are retained or deleted after successful processing. This setting has significant implications for data lifecycle management and system storage.\n\n**Behavior**\n\n- **When true**: Source files remain on the file server after processing\n- **When false** (default): Source files are automatically deleted after successful processing\n- **Error Handling**: Files are only deleted after SUCCESSFUL processing; failed files remain intact\n\n**Decision factors for AI agents**\n\nConsider recommending `skipDelete: true` when:\n\n1. **Compliance Requirements**:\n    - Regulatory frameworks require source file retention (GDPR, HIPAA, SOX)\n    - Audit trails need to maintain original file evidence\n    - Data retention policies mandate preserving source files\n\n2. **Operational Needs**:\n    - Files need to be processed by multiple different flows\n    - Source files serve as disaster recovery backups\n    - Re-processing might be required (for testing or validation)\n    - Source systems do not maintain their own copy of the files\n\nConsider recommending `skipDelete: false` (default) when:\n\n1. **Storage Optimization**:\n    - Working with large files that would consume significant storage\n    - High volume of files processed frequently\n    - Files are already backed up elsewhere\n    - Storage costs are a concern\n\n2. 
**Security Considerations**:\n    - Files contain sensitive data that should be minimized\n    - \"Clean workspace\" policies are in place\n    - Source files represent a potential security liability\n\n**Implementation guidance**\n\n- **Storage Planning**: When `skipDelete: true`, ensure sufficient storage is available for file accumulation\n- **File Organization**: Consider implementing an archiving strategy for retained files\n- **Monitoring**: Set up space monitoring when retaining files to prevent storage exhaustion\n- **Cleanup Automation**: If files must be retained but eventually deleted, consider a separate cleanup job\n\n**Integration patterns**\n\n- **Multi-stage Processing**: Set to `true` for files that need multi-step processing in separate flows\n- **Extract-Transform-Archive**: Set to `true` when original files need archiving after extraction\n- **Single-use Import**: Set to `false` for one-time imports where originals have no further value\n\n**Technical considerations**\n\nThis setting only affects the source file server. Records extracted from the files and processed through the flow are not affected by this setting - they continue through your integration regardless of this value.\n"},"compressionFormat":{"type":"string","description":"Specifies the compression format of the files being processed. 
This setting enables the system to automatically decompress files before parsing their contents.\n\n**Compression options**\n\n**Gzip (`\"gzip\"`)**\n- **File Extension**: Typically .gz, .gzip\n- **Characteristics**: Single-file compression, maintains original file name in metadata\n- **Compression Ratio**: Moderate to high, depends on file type (5-75% size reduction)\n- **Common Sources**: Linux/Unix systems, database exports, API response payloads\n- **Use Cases**: Individual file transfers, API response handling, log files\n\n**Zip (`\"zip\"`)**\n- **File Extension**: .zip\n- **Characteristics**: Archive format that can contain multiple files/directories\n- **Compression Ratio**: Moderate (usually 30-60% size reduction)\n- **Common Sources**: Windows systems, manual exports, email attachments\n- **Use Cases**: Multi-file packages, email attachments, mixed-format content\n\n**Implementation guidance for AI agents**\n\n**When to Configure Compression**\n\n1. **Source System Behavior**:\n    - Set when the source system always delivers compressed files\n    - Leave blank when files are delivered uncompressed\n    - NEVER set when files are sometimes compressed, sometimes not (this will cause errors)\n\n2. **Selection Criteria**:\n    - Examine file extensions (.zip, .gz) in the source system\n    - Check source system documentation for compression specifications\n    - Consider typical OS of the source (.zip for Windows, .gz for Unix/Linux)\n\n3. 
**Multi-file Considerations**:\n    - For .zip files containing multiple files, all files will be processed individually\n    - For nested compression (e.g., .gz files inside .zip), only the outer compression is handled\n\n**Technical Implementation Notes**\n\n- **Processing Flow**: Files are decompressed before any other processing (parsing, filtering)\n- **Filename Handling**: After decompression, the contained file name(s) are used for subsequent operations\n- **Error Conditions**: If a file is marked as compressed but isn't actually compressed, processing will fail\n- **Mixed Formats**: If files might arrive in different compression formats, use separate exports for each format\n\n**Performance Considerations**\n\n- **Network Efficiency**: Compressed files reduce transfer time from source to integration platform\n- **Processing Overhead**: Decompression adds slight processing time but saves network time\n- **Storage Impact**: Compressed files use less storage in transit but are decompressed for processing\n\nLeave this field blank if files are not compressed. Setting an incorrect compression format will cause processing errors.\n","enum":["gzip","zip"]},"purgeInternalBackup":{"type":"boolean","description":"Controls whether Celigo's internal backup system retains copies of processed files. This setting affects data retention, recovery capabilities, and compliance posture.\n\n**Behavior**\n\n- **When false** (default): Celigo maintains copies of all processed files for the duration of your account's retention policy\n- **When true**: Celigo will NOT keep internal backup copies of files processed by this specific export\n- **Scope**: This setting applies only to this specific export configuration; other exports are unaffected\n\n**Decision factors for AI agents**\n\n**Scenarios to Recommend `purgeInternalBackup: true`**\n\n1. 
**Data Sensitivity Requirements**:\n    - Files contain highly sensitive information (PII, PHI, financial, etc.)\n    - Data residency/sovereignty requirements prohibit additional copies\n    - Zero-retention policies mandate immediate deletion after processing\n    - Compliance frameworks require minimizing data copies (GDPR, HIPAA)\n\n2. **Technical Considerations**:\n    - Very large files where storage costs are significant\n    - Files that are already reliably backed up in source systems\n    - Files with very short-lived relevance (e.g., temporary processing files)\n    - Processing of non-production/test data that doesn't require retention\n\n**Scenarios to Recommend `purgeInternalBackup: false` (Default)**\n\n1. **Recovery Requirements**:\n    - Files represent critical business data with recovery needs\n    - Source systems don't maintain reliable backups\n    - Reprocessing capabilities are needed for disaster recovery\n    - Audit trails require evidence of processed files\n\n2. 
**Operational Benefits**:\n    - Troubleshooting integration issues requires access to source files\n    - Files might need reprocessing in case of downstream errors\n    - Historical analysis or validation may be required\n    - Protection against source system data loss\n\n**Implementation guidance**\n\n**Governance Considerations**\n\n- **Data Lifecycle**: Setting to `true` permanently removes files from Celigo after processing\n- **Recovery Impact**: Without backups, recovery from certain errors may require re-obtaining files from source systems\n- **Audit Trail**: Consider if processed files need to be available for future audits or investigations\n\n**Best Practices**\n\n- **Document Decision**: When setting to `true`, document the rationale for disabling backups\n- **Retention Alignment**: Ensure this setting aligns with overall data retention policies\n- **Risk Assessment**: Evaluate recovery needs against data minimization requirements\n- **Consistency**: Apply consistent backup settings across similar data types\n\n**System Impact**\n\nThis setting does NOT affect:\n- The processing of files during integration runs\n- Source files on their original servers (see `skipDelete` for that)\n- Storage of processed data records in the target system\n\nIt ONLY controls whether Celigo maintains internal copies of the original files.\n"},"decrypt":{"type":"string","description":"Specifies the decryption method to apply to incoming files before processing. This setting enables handling of encrypted files that require decryption before their contents can be parsed.\n\n**Supported encryption**\n\n**Pgp/gpg Encryption (`\"pgp\"`)**\n- **File Extensions**: Typically .pgp, .gpg, or .asc\n- **Encryption Standard**: OpenPGP (RFC 4880)\n- **Key Requirements**: Private key must be configured on the connection\n- **Common Sources**: Secure file transfers, encrypted backups, confidential data exchanges\n\n**Implementation requirements**\n\n1. 
**Connection Configuration Prerequisites**:\n    - This field assumes the connection has already been configured with appropriate cryptographic settings\n    - Private key must be uploaded to the connection configuration\n    - Passphrase (if applicable) must be configured on the connection\n    - For asymmetric encryption, the corresponding public key must have been used to encrypt the files\n\n2. **File Processing Flow**:\n    - Encrypted files are first retrieved from the source\n    - Decryption is applied using the configured connection's cryptographic settings\n    - After successful decryption, normal file processing continues (parsing, filtering, etc.)\n    - If decryption fails, the file processing will error out completely\n\n**Guidance for AI agents**\n\n**When to Configure Decryption**\n\n1. **Security Requirements**:\n    - Set to \"pgp\" when source files are PGP/GPG encrypted\n    - Required for end-to-end encrypted data transfers\n    - Common in financial, healthcare, and other industries with sensitive data\n    - Essential for compliance with certain data protection regulations\n\n2. 
**Technical Indicators**:\n    - File extensions indicate encryption (.pgp, .gpg, .asc)\n    - Source system documentation mentions PGP encryption\n    - Files cannot be opened with standard text editors\n    - Source system provides a public key for encryption\n\n**Implementation Considerations**\n\n- **Key Management**: Ensure private keys are securely stored and properly configured\n- **Error Handling**: Decryption failures will cause the entire file processing to fail\n- **Performance Impact**: Decryption adds processing overhead before file parsing begins\n- **Debugging Challenges**: Encrypted files cannot be easily examined for troubleshooting\n\n**Security Best Practices**\n\n- **Key Rotation**: Recommend periodic key rotation according to security policies\n- **Passphrase Protection**: Use strong passphrases for private keys when possible\n- **Access Control**: Limit access to connections with decryption capabilities\n- **Audit Logging**: Enable detailed logging for decryption operations when available\n\n**Integration with other settings**\n\n- If files are both encrypted AND compressed, decryption happens before decompression\n- Subsequent processing (based on file type settings) occurs after decryption\n- Internal backups (controlled by purgeInternalBackup) store the decrypted files unless configured otherwise\n\nCurrently, only PGP/GPG encryption is supported. For other encryption methods, custom preprocessing may be required.\n","enum":["pgp"]},"batchSize":{"type":"integer","description":"Controls the number of files processed in a single batch operation. 
This setting allows fine-tuning of performance and resource utilization during file processing.\n\n**Behavior and purpose**\n\n- **Function**: Limits the number of files processed in a single batch request\n- **Default**: If not specified, the system uses a default batch size based on file type\n- **Maximum**: 1000 files per batch (hard system limit)\n- **Impact**: Affects performance, memory usage, and error resilience, but NOT total processing capacity\n\n**Performance optimization guidance**\n\n**Large File Optimization (Set Lower Values: 10-50)**\n\nWhen working with large files (>10MB each), smaller batch sizes are recommended:\n\n- **Network Benefits**: Reduces timeout risks during file transfer\n- **Memory Usage**: Prevents excessive memory consumption\n- **Error Isolation**: Limits the impact of processing failures\n- **Example Scenarios**: Document processing, image files, complex spreadsheets\n\n```\n\"batchSize\": 20  // Good setting for large PDF or image files\n```\n\n**Small File Optimization (Set Higher Values: 100-1000)**\n\nWhen working with small files (<1MB each), larger batch sizes improve efficiency:\n\n- **Throughput**: Processes more files with less overhead\n- **API Efficiency**: Reduces the number of API calls\n- **Resource Utilization**: Maximizes processing efficiency\n- **Example Scenarios**: Small CSV files, transaction records, simple data files\n\n```\n\"batchSize\": 500  // Efficient for small data files\n```\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\n1. **File Size Assessment**:\n    - For files averaging >10MB: Recommend 10-20\n    - For files averaging 1-10MB: Recommend 20-100\n    - For files averaging <1MB: Recommend 100-500\n    - For very small files (<100KB): Consider maximum (1000)\n\n2. 
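Applying the sizing framework above, a hedged example for a feed of small (under 1 MB) CSV files, where a larger batch reduces per-call overhead (the enclosing `file` key is illustrative of this schema):

```json
{
  "file": {
    "type": "csv",
    "batchSize": 200
  }
}
```

A conservative starting point such as this can then be raised toward the 1000-file hard limit based on observed performance.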
**Reliability Factors**:\n    - For critical data with no retry capability: Recommend lower values\n    - For unstable network connections: Recommend lower values\n    - For production environments: Start conservative (lower) and increase based on performance\n    - For development/testing: Can use higher values for efficiency\n\n3. **System Constraints**:\n    - Consider available memory in the integration environment\n    - Evaluate network bandwidth and stability\n    - Account for source system rate limits or concurrent connection limits\n\n**Technical considerations**\n\n- **Error Handling**: If a batch fails, only that batch is retried (not individual files)\n- **Parallelism**: Batch size affects the degree of concurrent processing, within overall system limits\n- **Monitoring**: Larger batch sizes make monitoring individual file progress more difficult\n- **Resource Scaling**: Higher batch sizes require more memory but can complete faster\n\n**Relationship to other settings**\n\n- This setting controls file retrieval batching, not record processing batch size\n- Works in conjunction with compression and decryption settings\n- Separate from and complementary to the main flow's pageSize setting\n\nConsider starting with more conservative (lower) values and increasing based on performance monitoring.\n","maximum":1000},"sortByFields":{"type":"array","description":"Allows you to sort all records in a file before processing them. 
This configuration enables deterministic ordering of records, which can be critical for maintaining data consistency and enabling specific processing patterns.\n\n**Functionality overview**\n\n- **Purpose**: Establishes a specific processing order for records within files\n- **Timing**: Sorting is applied after file parsing but before any filtering or grouping\n- **Scope**: Affects only the in-memory representation of records (doesn't modify source files)\n- **Performance**: Has computational cost proportional to number of records × log(number of records)\n\n**Strategic uses for ai agents**\n\n**Business Process Optimization**\n\n1. **Chronological Processing**:\n    - Sort by date/timestamp fields to process events in time order\n    - Essential for financial transactions, audit logs, event sequences\n    - Example: `[{\"field\": \"transactionDate\", \"descending\": false}]`\n\n2. **Hierarchical Data Handling**:\n    - Sort by parent records before children\n    - Ensures referential integrity in relational data\n    - Example: `[{\"field\": \"parentId\", \"descending\": false}, {\"field\": \"lineNumber\", \"descending\": false}]`\n\n3. **Priority-Based Processing**:\n    - Sort by importance/priority fields to handle critical items first\n    - Useful for SLA-driven processes, tiered operations\n    - Example: `[{\"field\": \"priority\", \"descending\": true}, {\"field\": \"createdDate\", \"descending\": false}]`\n\n**Technical Optimization**\n\n1. **Grouping Efficiency**:\n    - Sorting by the same fields used in groupByFields improves grouping performance\n    - Reduces memory usage when processing large files\n    - Example: `[{\"field\": \"customerId\", \"descending\": false}]` with corresponding groupByFields\n\n2. **Lookup Optimization**:\n    - Sorting by reference fields enhances performance of subsequent lookups\n    - Minimizes database calls by enabling batch lookups\n    - Example: `[{\"field\": \"productSku\", \"descending\": false}]`\n\n3. 
**Error Reduction**:\n    - Sorting can ensure dependencies are processed in correct order\n    - Reduces failures from out-of-sequence processing\n    - Example: `[{\"field\": \"sequenceNumber\", \"descending\": false}]`\n\n**Implementation guidance**\n\n**Field Selection Considerations**\n\n- **Data Type Compatibility**: Fields must contain comparable values (dates, numbers, strings)\n- **Nulls Handling**: Null values are typically sorted last (after all non-null values)\n- **Nested Fields**: Use dot notation for accessing nested properties (`customer.region`)\n- **Performance Impact**: Each additional sort field increases computational cost\n\n**Common Implementation Patterns**\n\n```json\n// Simple single-field ascending sort (most common)\n[\n  {\"field\": \"orderDate\", \"descending\": false}\n]\n\n// Multi-field sort with primary and secondary criteria\n[\n  {\"field\": \"region\", \"descending\": false},\n  {\"field\": \"revenue\", \"descending\": true}\n]\n\n// Descending priority sort with tie-breaker\n[\n  {\"field\": \"priority\", \"descending\": true},\n  {\"field\": \"createdDate\", \"descending\": false}\n]\n```\n\n**Limitations and Constraints**\n\n- Sorting large datasets has memory implications; consider record volume\n- Maximum recommended number of sort fields: 3-5 (performance considerations)\n- Sorting effectiveness depends on data consistency in source files\n- Complex sorting logic might be better implemented in custom scripts\n","items":{"type":"object","properties":{"field":{"type":"string","description":"Specifies the record field to use as a sort key. 
This field name identifies which property of each record will be used for comparison when establishing processing order.\n\n**Field selection guidelines**\n\n**Data Type Considerations**\n\n- **Date/Time Fields**: Provide chronological sorting (`createdDate`, `timestamp`)\n- **Numeric Fields**: Enable quantitative ordering (`amount`, `sequenceNumber`, `priority`)\n- **String Fields**: Sort alphabetically (`name`, `status`, `category`)\n- **Boolean Fields**: Group records by true/false values (`isActive`, `isProcessed`)\n\n**Accessing Field Paths**\n\n- **Top-level Properties**: Direct field names (`orderNumber`, `date`)\n- **Nested Objects**: Use dot notation (`customer.name`, `address.country`)\n- **Array Elements**: Not directly supported in basic sorting; use preprocessing\n\n**Common Field Patterns by Domain**\n\n1. **Order Processing**:\n    - `orderDate`, `orderNumber`, `customerId`, `lineNumber`\n\n2. **Financial Data**:\n    - `transactionDate`, `accountNumber`, `amount`, `documentNumber`\n\n3. **Customer Records**:\n    - `lastName`, `firstName`, `customerType`, `region`\n\n4. **Inventory/Products**:\n    - `productCategory`, `itemNumber`, `stockLevel`, `reorderDate`\n\n5. **Event Logs**:\n    - `timestamp`, `severity`, `eventType`, `sourceSystem`\n\n**Implementation notes**\n\n- Field names are case-sensitive\n- Fields must exist in all records (or have consistent representation when missing)\n- Non-existent fields or null values are typically sorted last\n- Maximum recommended field name length: 64 characters\n"},"descending":{"type":"boolean","description":"Controls the sort direction for the specified field. 
This setting determines whether records will be arranged in ascending (lowest to highest) or descending (highest to lowest) order.\n\n**Behavior**\n\n- **When false or omitted**: Sorts in ascending order (A→Z, 0→9, oldest→newest)\n- **When true**: Sorts in descending order (Z→A, 9→0, newest→oldest)\n\n**Strategic direction selection**\n\n**Ascending Order (descending: false)**\n\nRecommended for:\n- Chronological event processing (earliest first)\n- Sequential operations with dependencies\n- Reference data that builds on previous records\n- Incremental ID or sequence numbers\n\nExample use cases:\n- Processing dated transactions in chronological order\n- Handling items in order of creation\n- Incrementally building state that depends on previous records\n\n**Descending Order (descending: true)**\n\nRecommended for:\n- Priority-based processing (highest first)\n- Recent-first temporal processing\n- Most significant items first\n- Limited processing where only top N items matter\n\nExample use cases:\n- Processing high-priority items before low-priority\n- Handling most recent updates first\n- Focusing on highest-value transactions first\n\n**Implementation patterns**\n\n**Single Field Direction**\n\n```json\n{\"field\": \"createdDate\", \"descending\": false}  // Oldest first\n{\"field\": \"createdDate\", \"descending\": true}   // Newest first\n```\n\n**Mixed Directions in Multi-field Sorts**\n\n```json\n// Group by category (A→Z) but show highest priority first in each category\n[\n  {\"field\": \"category\", \"descending\": false},\n  {\"field\": \"priority\", \"descending\": true}\n]\n```\n\n**Technical considerations**\n\n- Default value is `false` if omitted (ascending sort)\n- For date fields, ascending means oldest first\n- For numeric fields, ascending means smallest first\n- For string fields, ascending means alphabetical order\n"}}}},"groupByFields":{"$ref":"#/components/schemas/GroupBy"},"csv":{"type":"object","description":"Configuration settings for 
parsing CSV (Comma-Separated Values) files. This object defines how the system interprets delimited text files, handling variations in format, structure, and content.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"csv\". This configuration is required for properly parsing:\n- Standard CSV files (.csv)\n- Tab-delimited files (.tsv, .tab)\n- Other character-delimited files (semicolon, pipe, etc.)\n- Fixed-width text files converted to delimited format\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Examine sample files to identify delimiter pattern\n    - Check for presence/absence of header row\n    - Look for whitespace or quote pattern inconsistencies\n    - Identify any rows that should be skipped (headers, metadata, etc.)\n\n2. **Configuration Priority**:\n    - `columnDelimiter`: Most critical setting; incorrect delimiter causes parsing failures\n    - `hasHeaderRow`: Affects field mapping and identification\n    - `rowDelimiter`: Usually auto-detected but important for non-standard files\n    - `trimSpaces`: Important for inconsistent formatting\n    - `rowsToSkip`: Necessary when files contain metadata/comments before data\n\n3. 
**Common File Source Patterns**:\n\n    | Source System | Typical Delimiter | Header Row | Common Issues |\n    |--------------|-------------------|------------|---------------|\n    | Excel (US)   | Comma (,)         | Yes        | Quoted fields with embedded commas |\n    | Excel (EU)   | Semicolon (;)     | Yes        | Decimal separator conflicts |\n    | Legacy Systems | Pipe (\\|) or Tab | Varies     | Inconsistent field counts |\n    | POS Systems  | Comma or Tab      | Often No   | Trailing delimiters |\n    | ERP Exports  | Varies widely     | Usually Yes | Fixed field counts with padding |\n\n**Error prevention**\n\n- **Misaligned Columns**: Usually caused by incorrect delimiter or quote handling\n- **Truncated Data**: Can result from wrong row delimiter settings\n- **Field Misinterpretation**: Often caused by incorrect header row setting\n- **Character Encoding Issues**: Address with the parent `encoding` setting\n- **Whitespace Problems**: Resolve with `trimSpaces` setting\n\n**Optimization opportunities**\n\n- For maximum parsing speed, set only the minimal required settings\n- For problematic files with inconsistent formatting, use more restrictive settings\n- Balance between permissive parsing (more data accepted) and strict validation (cleaner data)\n","properties":{"columnDelimiter":{"type":"string","description":"Specifies the character sequence that separates individual fields (columns) within each row of the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies individual fields in each row\n- Default value: comma (,) if not specified\n- Special characters may need to be escaped\n\n**Common delimiter patterns**\n\n**Standard CSV (`,`)**\n```\n\"columnDelimiter\": \",\"\n```\n- Most common format in US/UK systems\n- Default for most spreadsheet exports\n- Used by: Microsoft Excel (US), Google Sheets, many database exports\n\n**European CSV (`;`)**\n```\n\"columnDelimiter\": \";\"\n```\n- Common in European locales where comma is the 
decimal separator\n- Standard format in many EU countries\n- Used by: Microsoft Excel (many EU locales), European business systems\n\n**Tab-Delimited (`\\t`)**\n```\n\"columnDelimiter\": \"\\t\"\n```\n- Used for tab-separated values (TSV) files\n- Better for data containing commas\n- Used by: Database exports, scientific data, legacy systems\n\n**Other Common Delimiters**\n- Pipe: `\"columnDelimiter\": \"|\"` (used in mainframes, legacy systems)\n- Colon: `\"columnDelimiter\": \":\"` (less common, specialized formats)\n- Space: `\"columnDelimiter\": \" \"` (uncommon, problematic with text fields)\n\n**Determination strategy for ai agents**\n\n1. **File Extension Check**:\n    - .csv → Usually comma (,)\n    - .tsv → Always tab (\\t)\n    - .txt → Could be any delimiter; needs inspection\n\n2. **Source System Analysis**:\n    - EU-based systems often use semicolon (;)\n    - Legacy/mainframe systems often use pipe (|)\n    - Scientific/statistical data often uses tab (\\t)\n\n3. **File Content Inspection**:\n    - Open file in text editor to identify separating character\n    - Check for character frequency patterns\n    - Look for consistent character between data elements\n\n4. 
**System Documentation**:\n    - Check export settings in source system\n    - Review file specifications if available\n\n**Implementation notes**\n\n- For tab delimiter, use `\"\\t\"` (escape sequence for tab)\n- If file contains the delimiter within text fields, ensure proper quoting\n- Multi-character delimiters are supported but rare\n- Setting the wrong delimiter is the most common parsing error\n"},"rowDelimiter":{"type":"string","description":"Specifies the character sequence that indicates the end of each record (row) in the CSV file.\n\n**Behavior**\n\n- Controls how the parser identifies the boundaries between records\n- Default: Auto-detect (system attempts to determine from file content)\n- Common values: newline (`\\n`), carriage return + newline (`\\r\\n`)\n\n**Common row delimiter patterns**\n\n**Windows-Style (`\\r\\n`)**\n```\n\"rowDelimiter\": \"\\r\\n\"\n```\n- CRLF (Carriage Return + Line Feed) sequence\n- Standard for files created on Windows systems\n- Used by: Microsoft Office, Windows-based applications\n\n**Unix-Style (`\\n`)**\n```\n\"rowDelimiter\": \"\\n\"\n```\n- LF (Line Feed) character only\n- Standard for files created on Unix/Linux/macOS (modern) systems\n- Used by: Linux applications, macOS applications, web exports\n\n**Classic Mac-Style (`\\r`)**\n```\n\"rowDelimiter\": \"\\r\"\n```\n- CR (Carriage Return) character only\n- Legacy format used by older Mac systems (pre-OSX)\n- Rare in modern files but still found in some legacy systems\n\n**When to specify explicitly**\n\nIn most cases, the auto-detection works well, but explicitly set this when:\n\n1. **Mixed Line Endings**: Files containing inconsistent line ending styles\n2. **Custom Record Separators**: Files using unconventional record delimiters\n3. **Parsing Errors**: When auto-detection fails to correctly separate records\n4. **Performance Optimization**: To avoid detection overhead in high-volume processing\n\n**Determination strategy for ai agents**\n\n1. 
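Putting the two delimiter settings together, a hedged sketch for a pipe-delimited extract produced on a Windows system (values chosen per the guidance above; this is the `csv` object from this schema, shown as a fragment):

```json
"csv": {
  "columnDelimiter": "|",
  "rowDelimiter": "\r\n"
}
```

Omitting `rowDelimiter` entirely is also reasonable here, since auto-detection handles standard line endings.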
**Source System Analysis**:\n    - Windows systems typically use `\\r\\n`\n    - Unix/Linux/macOS typically use `\\n`\n    - Web downloads could use either format\n\n2. **Troubleshooting Guidance**:\n    - If records are merged or split incorrectly, check for proper row delimiter\n    - If file opens correctly in text editor but parsing fails, row delimiter may be the issue\n    - For files with unusual record counts, examine row delimiter setting\n\n**Implementation notes**\n\n- Use escape sequences (`\\n`, `\\r\\n`, `\\r`) to represent control characters\n- Setting incorrect row delimiter may result in merged records or split records\n- When in doubt, leave unspecified to use auto-detection\n- Multi-character delimiters beyond standard line endings are supported but rare\n"},"hasHeaderRow":{"type":"boolean","description":"Indicates whether the CSV file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the file\n- Provides self-documenting data structure\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/index (e.g., Column1, Column2)\n- Record count includes all rows in the file\n- First row of data is the first physical row in the file\n- Requires external schema or position-based mapping\n\n**Determination strategy for ai agents**\n\n1. 
**Visual Inspection**:\n    - Check if the first row contains descriptive labels rather than actual data\n    - Look for data type consistency (headers are typically text, while data may be mixed)\n    - Headers often use camelCase, PascalCase, or snake_case formatting\n\n2. **Source System Analysis**:\n    - Most business systems include headers by default\n    - Legacy/mainframe systems may omit headers\n    - Data extracts intended for human use typically include headers\n\n3. **Content Patterns**:\n    - Headers typically don't match the pattern of subsequent data rows\n    - Headers often contain special characters not found in data (spaces, symbols)\n    - Data rows typically have consistent patterns while headers may differ\n\n**Common configurations by source**\n\n| Source Type | Typical Setting | Notes |\n|-------------|-----------------|-------|\n| Business Reports | true | Headers provide field context |\n| Database Exports | true | Column names as headers |\n| Legacy System Feeds | false | Often position-based fixed formats |\n| IoT/Sensor Data | false | Often compact, headerless formats |\n| Manual Data Entry | true | Helps maintain field alignment |\n\n**Best practices**\n\n- Always explicitly set this value rather than relying on the default\n- For data without headers, consider adding them in preprocessing if possible\n- When headers exist but should be ignored, use `hasHeaderRow: true` and `rowsToSkip: 1`\n- Document field positions when working with headerless files\n"},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace should be removed from field values during parsing.\n\n**Behavior**\n\n- **When true**: Removes all leading and trailing whitespace from each field value\n- **When false** (default): Preserves all whitespace in field values exactly as in the source\n- Applies to data fields only; header row values are always trimmed regardless of this setting\n\n**Implementation impact**\n\n**With 
Trimming Enabled (true)**\n\n- More consistent data for comparison and matching operations\n- Prevents issues with invisible whitespace affecting equality checks\n- Reduces storage space for text-heavy datasets\n- Helps normalize data from inconsistent sources\n\n**With Trimming Disabled (false)**\n\n- Preserves exact data as represented in the source file\n- Required when whitespace is semantically meaningful\n- Maintains original field lengths exactly\n- Necessary for certain data validation scenarios\n\n**Usage guidance for ai agents**\n\n**Recommend `trimSpaces: true` when**\n\n1. **Data Consistency Issues**:\n    - Source systems are known to have inconsistent spacing\n    - Data will be used for matching or comparison operations\n    - Files are generated by multiple different systems\n    - Human-entered data is present (prone to spacing errors)\n\n2. **Data Type Considerations**:\n    - Fields contain numeric values (where spaces are not meaningful)\n    - Fields contain codes, IDs, or reference values\n    - Fields will be used in lookups or joins\n    - Normalization is more important than exact representation\n\n**Recommend `trimSpaces: false` when**\n\n1. **Data Fidelity Requirements**:\n    - Working with fixed-width fields where spaces matter\n    - Dealing with formatted data where spacing is semantic\n    - Legal or compliance scenarios requiring exact preservation\n    - Scientific data where precision of representation matters\n\n2. 
**Content Characteristics**:\n    - Working with text fields where leading/trailing spaces could be intentional\n    - Processing creative content, addresses, or formatted text\n    - Source system uses space padding for alignment purposes\n\n**Implementation notes**\n\n- This setting affects all fields consistently (cannot be applied to select fields)\n- Only affects leading and trailing spaces, not spaces between words\n- Has no effect on empty fields (empty remains empty)\n- For selective trimming, use transformation rules after parsing\n"},"rowsToSkip":{"type":"integer","description":"Specifies the number of rows at the beginning of the file to ignore before starting data processing.\n\n**Behavior**\n\n- Skips the specified number of rows from the beginning of the file\n- These rows are completely ignored and not processed as data\n- The header row (if present) is counted after the skipped rows\n- Default value is 0 (no rows skipped)\n\n**Implementation impact**\n\n**Common Use Cases**\n\n1. **Metadata Headers**:\n    - Skip report titles, generated timestamps, system information\n    - Skip explanatory text at the beginning of files\n    - Skip company letterhead or report identification rows\n\n2. **Multi-Header Files**:\n    - Skip category headers or section titles\n    - Skip nested headers or hierarchy information\n    - Skip column grouping indicators\n\n3. **Technical Requirements**:\n    - Skip binary file markers or encoding identifiers\n    - Skip non-data content like instructions or disclaimers\n    - Skip inconsistent early rows before standardized data begins\n\n**Calculation guidance for ai agents**\n\nWhen determining the correct value for `rowsToSkip`:\n\n1. **Count Rows, Not Indexes**:\n    - The value is a count of leading rows to discard: a value of N skips physical rows 1 through N\n\n2. **For Files with Headers**:\n    - Set rowsToSkip = (first data row position - 1) - (hasHeaderRow ? 1 : 0)\n    - Example: If data starts on row 5, and file has a header row:\n      rowsToSkip = (5 - 1) - 1 = 3\n\n3. 
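The header-row example above (data beginning on physical row 5, with one header row immediately before it) corresponds to this fragment of the `csv` object:

```json
"csv": {
  "hasHeaderRow": true,
  "rowsToSkip": 3
}
```

Rows 1-3 are discarded, row 4 is read as the header, and data processing starts at row 5.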
**For Files without Headers**:\n    - Set rowsToSkip = (first data row position - 1)\n    - Example: If data starts on row 3, and file has no header row:\n      rowsToSkip = (3 - 1) = 2\n\n**Determination strategy**\n\n1. **Visual Inspection**:\n    - Open file in text editor and count non-data rows at the top\n    - Identify the first row containing actual data values\n    - Note if a header row exists separately from skipped content\n\n2. **Common Patterns by Source**:\n    - ERP Reports: Often 2-5 rows of report metadata\n    - Exported Spreadsheets: May have title rows, date stamps\n    - Database Extracts: Usually minimal (0-1) skipped rows\n    - Legacy Systems: May have control records or job information\n\n**Implementation notes**\n\n- Setting too high skips valid data; setting too low includes non-data as records\n- When in doubt, visually inspect the file to confirm correct skip count\n- Remember that header row (if hasHeaderRow=true) is counted AFTER skipped rows\n- Maximum recommended value: 100 (larger values may indicate format misunderstanding)\n"},"disableQuoteAndStripEnclosingQuotes":{"type":"boolean","description":"Controls the handling of quoted fields in CSV files, specifically how the parser manages quotation marks around field values.\n\n**Behavior**\n\n- **When false** (default): Standard CSV quoting rules are applied\n    - Quotation marks around fields protect embedded delimiters\n    - Parser intelligently handles escaped quotes within quoted fields\n    - Follows RFC 4180 CSV specifications for quote handling\n\n- **When true**: Quote detection and processing is disabled\n    - All quotes are treated as literal characters, not field delimiters\n    - Any quotes surrounding field values are removed\n    - Embedded delimiters in quoted fields will cause field splitting\n\n**Implementation impact**\n\n**Standard Quote Handling (false)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `Smith, 
John` (comma preserved inside quotes)\n- Field 2: `42`\n- Field 3: `Notes with \"quotes\" inside` (embedded quotes normalized)\n\n**Disabled Quote Handling (true)**\n\nExample input: `\"Smith, John\",42,\"Notes with \"\"quotes\"\" inside\"`\n\nResult:\n- Field 1: `\"Smith`\n- Field 2: ` John\"`\n- Field 3: `42`\n- Field 4: `\"Notes with \"\"quotes\"\" inside\"`\n\n**Usage guidance for ai agents**\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: true` when**\n\n1. **Quote-Related Parsing Problems**:\n    - Files contain inconsistent or malformed quote usage\n    - Source system doesn't follow standard CSV quoting rules\n    - Quotes appear as literal data rather than field delimiters\n    - Quotes are present but delimiters are never embedded in fields\n\n2. **Special Data Formats**:\n    - Working with custom delimited formats that don't use quotes for escaping\n    - Files use alternate escaping mechanisms for embedded delimiters\n    - Source system adds quotes to all fields regardless of content\n\n**Recommend `disableQuoteAndStripEnclosingQuotes: false` (default) when**\n\n1. **Standard CSV Compliance**:\n    - Files follow RFC 4180 or similar CSV standards\n    - Fields contain embedded delimiters that must be preserved\n    - Quotes are used properly to enclose fields with special characters\n    - Source is a standard database, spreadsheet, or business system export\n\n2. 
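For a feed where quotes are literal data rather than field enclosures (and delimiters are never embedded in fields), a hedged sketch of the `csv` object:

```json
"csv": {
  "columnDelimiter": ",",
  "disableQuoteAndStripEnclosingQuotes": true
}
```

Test against representative sample files before adopting this, since the setting changes how every field in the file is parsed.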
**Data Content Characteristics**:\n    - Fields contain embedded commas, newlines, or other delimiters\n    - Text fields might contain quotation marks as part of the content\n    - Preserving the exact structure of complex text fields is important\n\n**Troubleshooting indicators**\n\nConsider changing this setting when encountering these issues:\n\n- Field counts vary unexpectedly between rows\n- Text with embedded delimiters is being split into multiple fields\n- Quotes appearing at the beginning and end of every field in the result\n- Extra quote characters appearing within field values\n\n**Implementation notes**\n\n- This setting significantly changes parsing behavior; test thoroughly\n- Affects all fields in the file consistently\n- Incorrect setting can cause severe data misalignment\n- When field count inconsistency occurs, review this setting first\n"}}},"json":{"type":"object","description":"Configuration settings for parsing JSON (JavaScript Object Notation) files. This object defines how the system interprets and processes hierarchical data contained in JSON-formatted files.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"json\". This configuration is required for properly parsing:\n- Standard JSON files (.json)\n- JSON data exports from APIs or databases\n- JSON Lines format (newline-delimited JSON)\n- Nested or hierarchical data structures\n\n**JSON parsing characteristics**\n\n- **Hierarchical Data**: JSON naturally supports nested objects and arrays\n- **Type Preservation**: Numbers, booleans, nulls, and strings are correctly typed\n- **Flexible Structure**: Can handle varying record structures\n- **Tree Navigation**: Supports complex object traversal via path expressions\n\n**Implementation strategy for ai agents**\n\n1. 
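A sketch of a JSON file configuration for an API-style payload whose record array sits under a top-level `data` container (the enclosing `file` key is illustrative of this schema):

```json
{
  "file": {
    "type": "json",
    "json": {
      "resourcePath": "data"
    }
  }
}
```

If the records were a bare array at the root instead, `resourcePath` would be omitted entirely.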
**Data Structure Analysis**:\n    - Examine sample files to understand the object hierarchy\n    - Identify where the actual records/rows are located in the structure\n    - Determine if records are at the root or nested within containers\n    - Check for array structures that contain the target records\n\n2. **Common JSON Data Patterns**:\n\n    **Root Array Pattern**\n    ```json\n    [\n      {\"id\": 1, \"name\": \"Product 1\"},\n      {\"id\": 2, \"name\": \"Product 2\"}\n    ]\n    ```\n    - Records are directly at the root as an array\n    - No resourcePath needed (leave blank)\n    - Most straightforward structure for processing\n\n    **Container Object Pattern**\n    ```json\n    {\n      \"data\": [\n        {\"id\": 1, \"name\": \"Product 1\"},\n        {\"id\": 2, \"name\": \"Product 2\"}\n      ],\n      \"metadata\": {\n        \"count\": 2,\n        \"page\": 1\n      }\n    }\n    ```\n    - Records are in an array inside a container object\n    - Requires resourcePath (e.g., \"data\")\n    - Common in API responses with metadata\n\n    **Nested Container Pattern**\n    ```json\n    {\n      \"response\": {\n        \"results\": [\n          {\"id\": 1, \"name\": \"Product 1\"},\n          {\"id\": 2, \"name\": \"Product 2\"}\n        ],\n        \"pagination\": {\n          \"nextPage\": 2\n        }\n      },\n      \"status\": \"success\"\n    }\n    ```\n    - Records are deeply nested in the hierarchy\n    - Requires dot notation in resourcePath (e.g., \"response.results\")\n    - Common in complex API responses\n\n**Error prevention**\n\n- **Invalid Path**: Incorrectly specified resourcePath results in zero records found\n- **Type Mismatch**: resourcePath must point to an array of objects for proper record processing\n- **Empty Results**: If path resolves to null or non-existent field, no error is thrown but no records are processed\n- **Parsing Failures**: Malformed JSON will cause the entire file processing to fail\n\n**Optimization 
opportunities**\n\n- For large JSON files, consider preprocessing to extract only relevant sections\n- For files with complex structures, validate the resourcePath with sample data\n- When processing API responses, coordinate resourcePath with the API documentation\n- For very large datasets, consider using streaming JSON parsing (NDJSON format)\n","properties":{"resourcePath":{"type":"string","description":"Specifies the path to the array of records within the JSON structure. This field helps the system locate and extract the target records when they're nested inside a larger JSON object hierarchy.\n\n**Behavior**\n\n- **Purpose**: Identifies where the array of records is located in the JSON structure\n- **Format**: Dot notation path to navigate nested objects (e.g., \"data.records\")\n- **When Empty**: System expects records to be at the root level as an array\n- **Result**: Array found at this path is processed as individual records\n\n**Path notation guidelines**\n\n**Basic Path Patterns**\n\n- **Root Level Array**: Leave empty or null when records are a direct array at root\n- **Single Level Nesting**: Use the property name (e.g., \"data\", \"results\", \"items\")\n- **Multi-Level Nesting**: Use dot notation (e.g., \"response.data.items\")\n\n**Path Construction Rules**\n\n1. **Object Navigation**:\n    - Use dots to traverse object properties: \"parent.child.grandchild\"\n    - Each segment must be a valid property name in the JSON\n\n2. **Target Requirements**:\n    - The path MUST resolve to an array of objects\n    - Each object in the array will be processed as one record\n    - The array must be the final element in the path\n\n3. **Limitations**:\n    - Array indexing is not supported in the path (e.g., \"data[0]\")\n    - Wildcard selectors are not supported\n    - Regular expressions are not supported\n\n**Determination strategy for ai agents**\n\nTo identify the correct resourcePath:\n\n1. 
**Examine Sample Data**:\n    - Open a sample JSON file or API response\n    - Locate the array containing the actual data records\n    - Note the full path from root to this array\n\n2. **Common Patterns by Source**:\n\n    | Source Type | Common Paths | Example |\n    |-------------|--------------|---------|\n    | REST APIs | \"data\", \"results\", \"items\" | \"data\" |\n    | Complex APIs | \"response.data\", \"data.items\" | \"response.data\" |\n    | Database Exports | \"rows\", \"records\", \"exports\" | \"rows\" |\n    | CRM Systems | \"contacts\", \"accounts\", \"opportunities\" | \"contacts\" |\n    | Analytics APIs | \"data.rows\", \"response.data.rows\" | \"data.rows\" |\n\n3. **Verification Approach**:\n    - The path should resolve to an array (typically with square brackets in the JSON)\n    - Each element in this array should represent one complete record\n    - The array should not be a property array (like tags or categories)\n\n**Implementation examples**\n\n**Root Array (No Path Needed)**\n\nJSON Structure:\n```json\n[\n  {\"id\": 1, \"name\": \"Record 1\"},\n  {\"id\": 2, \"name\": \"Record 2\"}\n]\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"\"  // or omit entirely\n```\n\n**Single-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"orders\": [\n    {\"id\": \"A001\", \"customer\": \"John\"},\n    {\"id\": \"A002\", \"customer\": \"Jane\"}\n  ],\n  \"count\": 2\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"orders\"\n```\n\n**Multi-Level Nesting**\n\nJSON Structure:\n```json\n{\n  \"response\": {\n    \"data\": {\n      \"customers\": [\n        {\"id\": 1, \"name\": \"Acme Corp\"},\n        {\"id\": 2, \"name\": \"Globex Inc\"}\n      ]\n    },\n    \"status\": \"success\"\n  }\n}\n```\n\nConfiguration:\n```json\n\"resourcePath\": \"response.data.customers\"\n```\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath setting:\n\n- Export completes successfully but processes 0 records\n- 
\"Cannot read property 'forEach' of undefined\" errors\n- \"Expected array but got object/string/number\" errors\n- Records appear flattened or with unexpected structure\n\n**Best practices**\n\n- Always verify the path with sample data before deployment\n- Use the simplest path that reaches the target array\n- Document the expected JSON structure alongside the configuration\n- For APIs with changing response structures, implement validation checks\n"}}},"xlsx":{"type":"object","description":"Configuration settings for parsing Microsoft Excel (XLSX) files. This object defines how the system interprets and extracts data from Excel workbooks, handling their unique structures and formatting.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xlsx\". This configuration is required for properly parsing:\n- Modern Excel files (.xlsx) using the Open XML format\n- Excel workbooks with multiple sheets\n- Files exported from Microsoft Excel or compatible applications\n- Spreadsheet data with formatting, formulas, or multiple worksheets\n\n**Excel parsing characteristics**\n\n- **Multiple Worksheets**: Can access data from specific sheets within workbooks\n- **Cell Formatting**: Handles various data types (text, numbers, dates, etc.)\n- **Formula Resolution**: Retrieves calculated values rather than formulas\n- **Data Extraction**: Converts tabular Excel data to structured records\n\n**Implementation strategy for ai agents**\n\n1. **File Analysis**:\n    - Determine if the source file is an actual .xlsx format (not .xls, .csv, etc.)\n    - Identify which worksheet contains the target data\n    - Check for header rows, merged cells, or other special formatting\n    - Note any preprocessing required (hidden rows, filtered data, etc.)\n\n2. 
**Common Excel File Patterns**:\n\n    **Standard Data Table**\n    - Data organized in clear rows and columns\n    - First row contains headers\n    - No merged cells or complex formatting\n    - Most straightforward to process\n\n    **Report-Style Workbook**\n    - Contains titles, headers, and possibly footers\n    - May have merged cells for headings\n    - Could have multiple tables on a single sheet\n    - May require specific sheet selection or row skipping\n\n    **Multi-Sheet Workbook**\n    - Data distributed across multiple worksheets\n    - May require multiple export configurations\n    - Often needs sheet name specification (via pre-processing)\n    - Common in financial or complex business reports\n\n**Limitations and considerations**\n\n- **Hidden Data**: Hidden rows/columns are still processed unless filtered\n- **Formatting Loss**: Visual formatting and styles are ignored\n- **Formula Handling**: Only calculated values are extracted, not formulas\n- **Non-Tabular Data**: Pivot tables and non-tabular layouts may cause issues\n- **Large Files**: Very large Excel files may require additional memory\n\n**Error prevention**\n\n- **Format Compatibility**: Ensure the file is modern .xlsx format, not legacy .xls\n- **Data Structure**: Verify data is in a consistent tabular format\n- **Special Characters**: Watch for special characters in header rows\n- **Empty Sheets**: Check that target worksheets contain actual data\n\n**Optimization opportunities**\n\n- For complex workbooks, consider pre-processing to simplify structure\n- For large files, extract only necessary worksheets/ranges before processing\n- When possible, use files with consistent tabular layouts\n- Consider converting Excel data to CSV format for simpler processing\n","properties":{"hasHeaderRow":{"type":"boolean","description":"Indicates whether the Excel file contains a header row with field names as the first row.\n\n**Behavior**\n\n- **When true** (default): First row is treated as 
field names, not data\n- **When false**: All rows including the first are treated as data records\n- Impacts field mapping, validation, and record counting\n\n**Implementation impact**\n\n**With Header Row (true)**\n\n- Field names from the header row can be referenced in mappings\n- Record count excludes the header row\n- First row of data is the second physical row in the spreadsheet\n- Column names are derived from the first row text values\n- Blank header cells may be auto-named (Column1, Column2, etc.)\n\n**Without Header Row (false)**\n\n- Fields are referenced by position/Excel column letters (A, B, C, etc.)\n- Record count includes all rows in the sheet\n- First row of data is the first physical row in the spreadsheet\n- Requires external schema or position-based mapping\n- All fields are given generic names (Column1, Column2, etc.)\n\n**Determination strategy for ai agents**\n\nTo determine if a header row exists and should be configured:\n\n1. **Visual Inspection**:\n    - Open the Excel file and examine the first row\n    - Look for descriptive labels rather than actual data values\n    - Check for formatting differences between the first row and others\n    - Header rows often use bold formatting or different background colors\n\n2. **Content Analysis**:\n    - Headers typically contain text while data rows may contain mixed types\n    - Headers often use naming conventions (camelCase, Title Case, etc.)\n    - Headers don't follow the pattern/format of subsequent data rows\n    - Headers rarely contain numeric-only values (unless they're codes)\n\n3. **Source Context**:\n    - Business reports almost always include headers\n    - Data exports from systems typically include column names\n    - Machine-generated data might skip headers\n    - Scientific or technical data sometimes omits headers\n\n**Usage guidance for ai agents**\n\n**Recommend `hasHeaderRow: true` when**\n\n1. 
**Standard Business Data**:\n    - Most business Excel files include headers\n    - Reports and exports from business systems\n    - Files intended for human readability\n    - When column names provide important context\n\n2. **Integration Requirements**:\n    - When field names are needed for mapping\n    - When data needs to be self-describing\n    - When header names match target system fields\n    - For maintaining field identity across systems\n\n**Recommend `hasHeaderRow: false` when**\n\n1. **Special Data Types**:\n    - Scientific or sensor data without labels\n    - Machine-generated output files\n    - Legacy system exports with position-based fields\n    - When all rows contain actual data values\n\n2. **Technical Scenarios**:\n    - When the first row contains required data\n    - When column positions are used for mapping\n    - When headers are inconsistent or misleading\n    - For maximum data extraction with minimal configuration\n\n**Implementation notes**\n\n- This setting affects all worksheets in multi-sheet processing\n- Excel column names with spaces or special characters may be normalized\n- Duplicate header names will be made unique with suffixes\n- Empty header cells will get automatically generated names\n- Maximum recommended header length: 64 characters\n- Consider pre-processing files without headers to add them for clarity\n"}}},"xml":{"type":"object","description":"Configuration settings for parsing XML (Extensible Markup Language) files. This object defines how the system navigates and extracts hierarchical data from XML documents, enabling processing of structured markup data.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"xml\". 
This configuration is required for properly parsing:\n- Standard XML files (.xml)\n- SOAP API responses and web service outputs\n- Industry-specific XML formats (EDI, NIEM, UBL, etc.)\n- Document-oriented data with hierarchical structure\n\n**XML parsing characteristics**\n\n- **Hierarchical Structure**: Processes nested elements and attributes\n- **Schema Independence**: Works with or without formal XML schemas\n- **Node Selection**: Uses XPath to precisely target record elements\n- **Namespace Support**: Handles XML namespaces in complex documents\n\n**Implementation strategy for ai agents**\n\n1. **Document Analysis**:\n    - Examine the XML structure to identify repeating elements (records)\n    - Determine the hierarchical level where target records exist\n    - Identify any namespaces that must be addressed\n    - Note attributes vs. element content patterns\n\n2. **Common XML Data Patterns**:\n\n    **Simple Element List**\n    ```xml\n    <Records>\n      <Record id=\"1\">\n        <Name>Product 1</Name>\n        <Price>10.99</Price>\n      </Record>\n      <Record id=\"2\">\n        <Name>Product 2</Name>\n        <Price>20.99</Price>\n      </Record>\n    </Records>\n    ```\n    - Records are identical element types with similar structure\n    - Direct children of a container element\n    - XPath: `/Records/Record`\n\n    **Namespaced XML**\n    ```xml\n    <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n      <soap:Body>\n        <ns:GetCustomersResponse xmlns:ns=\"http://example.com/api\">\n          <ns:Customer id=\"1\">\n            <ns:Name>Acme Corp</ns:Name>\n          </ns:Customer>\n          <ns:Customer id=\"2\">\n            <ns:Name>Globex Inc</ns:Name>\n          </ns:Customer>\n        </ns:GetCustomersResponse>\n      </soap:Body>\n    </soap:Envelope>\n    ```\n    - Elements use XML namespaces\n    - Records are nested within service response structures\n    - XPath: `//ns:Customer` or 
`/soap:Envelope/soap:Body/ns:GetCustomersResponse/ns:Customer`\n\n    **Heterogeneous Records**\n    ```xml\n    <Feed>\n      <Entry type=\"product\">\n        <ProductId>123</ProductId>\n        <Name>Widget</Name>\n      </Entry>\n      <Entry type=\"category\">\n        <CategoryId>A5</CategoryId>\n        <Label>Supplies</Label>\n      </Entry>\n    </Feed>\n    ```\n    - Same element type may have different internal structures\n    - Usually identified by an attribute or child element type\n    - May require multiple export configurations\n    - XPath: `/Feed/Entry[@type=\"product\"]`\n\n**XPath query formulation**\n\nXPath is a powerful language for selecting nodes in XML documents. When formulating a resourcePath:\n\n- **Absolute Paths** (starting with `/`): Select from the document root\n- **Relative Paths** (no leading `/`): Select from the current context\n- **Any-Level Selection** (`//`): Select matching nodes regardless of location\n- **Predicates** (`[]`): Filter elements based on attributes or content\n- **Attribute Selection** (`@`): Select attribute values instead of elements\n\n**Error prevention**\n\n- **Invalid XPath**: Test the resourcePath against sample data before deployment\n- **Namespace Issues**: Ensure proper namespace handling in complex documents\n- **Empty Results**: Verify that the XPath selects the intended nodes and not an empty set\n- **Encoding Problems**: Use the correct encoding setting for international content\n\n**Optimization opportunities**\n\n- For large XML files, use more specific XPaths to reduce processing overhead\n- For complex structures, consider preprocessing to simplify before parsing\n- For SOAP responses, extract just the response body before processing\n- For recurring integrations, document the exact XPath with examples\n","properties":{"resourcePath":{"type":"string","description":"Specifies the XPath expression used to locate record elements within the XML document. 
This critical field determines which XML nodes are treated as individual records for processing.\n\n**Behavior**\n\n- **Purpose**: Identifies which elements in the XML represent individual records\n- **Format**: Uses XPath syntax to select nodes from the document structure\n- **Requirement**: MANDATORY for XML processing - no default value exists\n- **Result**: Each XML element matching the XPath is processed as one record\n\n**XPath syntax guidance**\n\n**Core XPath Patterns**\n\n1. **Direct Child Selection** (`/Root/Element`):\n    ```xml\n    <Root>\n      <Element>Record 1</Element>\n      <Element>Record 2</Element>\n    </Root>\n    ```\n    - XPath: `/Root/Element`\n    - Selects elements that are direct children following exact path\n    - Most precise, requires exact hierarchy knowledge\n    - Recommended when structure is consistent and well-known\n\n2. **Any-Level Selection** (`//Element`):\n    ```xml\n    <Root>\n      <Section>\n        <Element>Record 1</Element>\n      </Section>\n      <Container>\n        <Element>Record 2</Element>\n      </Container>\n    </Root>\n    ```\n    - XPath: `//Element`\n    - Selects all matching elements regardless of location\n    - More flexible, works across varying structures\n    - Use when element hierarchy may vary or is unknown\n\n3. **Filtered Selection** (`//Element[@attr=\"value\"]`):\n    ```xml\n    <Root>\n      <Element type=\"product\">Record 1</Element>\n      <Element type=\"category\">Not a record</Element>\n      <Element type=\"product\">Record 2</Element>\n    </Root>\n    ```\n    - XPath: `//Element[@type=\"product\"]`\n    - Selects only elements matching both name and attribute criteria\n    - Precise targeting when elements have identifying attributes\n    - Useful for heterogeneous XML with type indicators\n\n**Advanced Selection Techniques**\n\n1. **Position-Based** (`/Root/Element[1]`):\n    - Selects first element only\n    - Use when only certain occurrences should be processed\n\n2. 
**Content-Based** (`//Element[contains(text(),\"Value\")]`):\n    - Selects elements containing specific text\n    - Useful for filtering based on content\n\n3. **Parent-Relative** (`//Parent[Child=\"Value\"]/Element`):\n    - Selects elements with specific sibling or parent conditions\n    - Powerful for complex structural conditions\n\n**Namespace handling**\n\nWhen working with namespaced XML:\n\n```xml\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"\n              xmlns:ns=\"http://example.com/api\">\n  <soap:Body>\n    <ns:Response>\n      <ns:Customer>Record 1</ns:Customer>\n      <ns:Customer>Record 2</ns:Customer>\n    </ns:Response>\n  </soap:Body>\n</soap:Envelope>\n```\n\nThe system automatically handles namespaces, but for clarity and precision:\n\n1. **Namespace-Aware Path**:\n    - XPath: `/soap:Envelope/soap:Body/ns:Response/ns:Customer`\n    - Include namespace prefixes as they appear in the document\n\n2. **Namespace-Agnostic Path**:\n    - XPath: `//Customer` or `//*[local-name()=\"Customer\"]`\n    - Use when you want to ignore namespaces entirely\n\n**Determination strategy for ai agents**\n\n1. **Identify Record Elements**:\n    - Look for repeating elements that represent individual \"rows\" of data\n    - These elements typically have the same name and similar structure\n    - They often contain multiple child elements representing \"fields\"\n\n2. **Analyze Element Hierarchy**:\n    - Note the path from root to record elements\n    - Determine if records appear at consistent locations or vary\n    - Check if they need to be filtered by attributes or position\n\n3. 
**Test Path Specificity**:\n    - More specific paths reduce processing overhead but are less flexible\n    - More general paths (with `//`) are robust to structure changes but less efficient\n    - Balance specificity with flexibility based on source stability\n\n**Common XPath patterns by source**\n\n| Source Type | Common XPath Pattern | Example |\n|-------------|----------------------|---------|\n| SOAP APIs | `/Envelope/Body/*/Response/*` | `/soap:Envelope/soap:Body/ns:GetOrdersResponse/ns:Order` |\n| REST XML | `/Response/Results/*` | `/ApiResponse/Results/Customer` |\n| Feeds | `/Feed/Entry` or `/Feed/Item` | `/rss/channel/item` |\n| Documents | `//Section/Item` | `//Chapter/Paragraph` |\n| EDI/Business | `/Document/Transaction/Line` | `/Invoice/LineItems/Item` |\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, review the resourcePath:\n\n- Export completes successfully but processes 0 records\n- Records contain unexpected or partial data\n- Only first level of data is extracted (missing nested content)\n- Namespace-related \"element not found\" errors\n\n**Implementation notes**\n\n- XPath is case-sensitive; element and attribute names must match exactly\n- Each matching element becomes a separate record for processing\n- Child elements become fields in the processed record\n- Attributes can be included in field data if needed\n- Namespaces are handled automatically but may require explicit prefixes\n- Testing with an XPath tool on sample data is highly recommended\n"}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". 
This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n\n**Use case scenarios**\n\n**Fixed-Width Files**\n\nFiles where each field has a specific starting position and length:\n```\nCUST00001JOHN     DOE       123 MAIN ST\nCUST00002JANE     SMITH     456 OAK AVE\n```\n- Fields are positioned by character count rather than delimiters\n- Requires precise position and length definitions\n- Common in legacy mainframe and banking systems\n\n**EDI Documents**\n\nElectronic Data Interchange formats for business transactions:\n```\nISA*00*          *00*          *ZZ*SENDER         *ZZ*RECEIVER       *...\nGS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010\nST*850*0001\nBEG*00*SA*123456**20210101\n...\n```\n- Highly structured with segment identifiers and element separators\n- Contains multiple record types with different structures\n- Requires complex parsing rules and validation\n\n**Multi-Record Files**\n\nFiles containing different record types identified by indicators:\n```\nH|SHIPMENT|20210115|PRIORITY\nD|ITEM001|5|WIDGET|RED\nD|ITEM002|10|GADGET|BLUE\nT|2|15|COMPLETE\n```\n- Each line starts with a record type indicator\n- Different record types have different field structures\n- Requires conditional processing based on record type\n\n**Error prevention**\n\n- **Definition Mismatch**: Ensure the file definition matches the actual file format\n- **Missing Definition**: Verify the file definition exists before referencing it\n- **Access Issues**: Confirm the integration has permission to use the file definition\n- **Version Compatibility**: Check if file definition version matches current file format\n\n**Optimization opportunities**\n\n- Document which file definition is used and why it's 
appropriate for the file format\n- Consider creating purpose-specific file definitions for complex formats\n- Test file definitions with sample files before deploying in production\n- Maintain documentation of the file structure alongside the file definition reference\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"The unique identifier of the file definition to use for parsing the file. This ID references a preconfigured file definition resource that contains the detailed parsing instructions for a specific file format.\n\n**Field behavior**\n\n- **Purpose**: References an existing file definition resource in the system\n- **Format**: MongoDB ObjectId (24-character hexadecimal string)\n- **Requirement**: MANDATORY when type=\"filedefinition\"\n- **Validation**: Must reference a valid, accessible file definition\n\n**Understanding file definitions**\n\nA file definition is a separate resource that defines:\n\n1. **Record Structure**:\n    - Field names, positions, and data types\n    - Record identifiers and format specifications\n    - Parsing rules and field extraction logic\n\n2. **Processing Rules**:\n    - How to identify different record types\n    - How to handle headers, footers, and details\n    - Data validation and transformation rules\n\n3. **Format-Specific Settings**:\n    - For fixed-width: Character positions and field lengths\n    - For EDI: Segment identifiers and element separators\n    - For proprietary formats: Custom parsing instructions\n\n**Obtaining the correct ID**\n\nTo identify the appropriate file definition ID:\n\n1. **System Administration**:\n    - Check with system administrators for a list of available file definitions\n    - Request the specific ID for the file format you need to process\n    - Verify the file definition's compatibility with your file format\n\n2. 
**File Definition Catalog**:\n    - If available, consult the file definition catalog in the system\n    - Search for definitions matching your file format requirements\n    - Note the ObjectId of the appropriate definition\n\n3. **Custom Definition Creation**:\n    - If no suitable definition exists, request creation of a new one\n    - Provide sample files and format specifications\n    - Obtain the new file definition's ID after creation\n\n**Implementation guidance for ai agents**\n\n**Recommendation Framework**\n\nWhen implementing a file definition-based export:\n\n1. **Verify Definition Existence**:\n    - Confirm the file definition exists before configuration\n    - Do not guess or generate random IDs\n    - Request specific ID from system administrators\n\n2. **Documentation Requirements**:\n    - Document which file definition is being used and why\n    - Note any specific requirements or limitations of the definition\n    - Record the mapping between file fields and integration needs\n\n3. 
**Testing Approach**:\n    - Recommend testing with sample files before production use\n    - Verify all required fields are correctly extracted\n    - Validate the parsing results meet integration requirements\n\n**Common File Definition Categories**\n\n| Category | Description | Example Formats |\n|----------|-------------|----------------|\n| Fixed-Width | Fields defined by character positions | Banking transactions, government reports |\n| EDI | Electronic Data Interchange standards | X12, EDIFACT, TRADACOMS |\n| Hierarchical | Complex parent-child structures | Specialized industry formats |\n| Multi-Record | Different record types in one file | Inventory systems, financial exports |\n| Proprietary | Custom or legacy system formats | Mainframe exports, specialized software |\n\n**Technical considerations**\n\n- File definitions are reusable across multiple exports\n- Changes to a file definition affect all exports using it\n- File definitions may have version dependencies\n- Some file definitions may require specific pre-processing settings\n- Performance impact varies based on definition complexity\n\n**Troubleshooting indicators**\n\nIf you encounter these issues, verify the file definition ID:\n\n- \"File definition not found\" errors\n- Unexpected field mapping or missing fields\n- Data type conversion errors\n- Parsing failures with specific record types\n\nAlways document the exact file definition ID with its purpose to facilitate troubleshooting and maintenance.\n"}}},"filter":{"allOf":[{"description":"Configuration for selectively processing files based on specified criteria. 
This object enables precise\ncontrol over which files are included or excluded from the export operation.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before file processing begins:\n- Files that match the filter criteria are processed\n- Files that don't match are completely skipped\n- No partial file processing is performed\n\n**Available filter fields**\n\nThe specific fields available for file filtering are contained in the `fileMeta` property.\n\n**Common Filter Fields**\n\nThese fields are commonly available across most file providers:\n\n1. **filename**: The name of the file (with extension)\n  - Example filter: Match files with specific extensions or naming patterns\n  - Usage: `[\"endswith\", [\"extract\", \"filename\"], \".csv\"]`\n\n2. **filesize**: The size of the file in bytes\n  - Example filter: Skip files that are too large or too small\n  - Usage: `[\"lessthan\", [\"number\", [\"extract\", \"filesize\"]], 1000000]`\n\n3. **lastmodified**: The last modification timestamp of the file\n  - Example filter: Process only files created/modified within a specific date range\n  - Usage: `[\"greaterthan\", [\"extract\", \"lastmodified\"], \"2023-01-01T00:00:00Z\"]`\n"},{"$ref":"#/components/schemas/Filter"}]},"backupPath":{"type":"string","description":"The file system path where backup files will be stored before processing. This path specifies a directory location where the system will create backup copies of files before they are processed by the export flow.\n\n**Backup mechanism overview**\n\nThe backup mechanism creates a copy of source files in the specified location before processing begins. 
This provides:\n\n- **Data Safety**: Preserves original files in case of processing errors\n- **Audit Trail**: Maintains historical record of exported data\n- **Recovery Option**: Enables reprocessing from original files if needed\n- **Compliance Support**: Helps meet data retention requirements\n\n**Path configuration guidelines**\n\nThe path format must follow these conventions:\n\n- **Absolute Paths**: Must start with \"/\" (Unix/Linux) or include drive letter (Windows)\n- **Relative Paths**: Interpreted relative to the application's working directory\n- **Network Paths**: Can use UNC format (\\\\server\\share\\path) or mounted network drives\n- **Access Requirements**: The path must be writable by the service account running the integration\n\n**Implementation strategy for ai agents**\n\nWhen configuring the backup path, consider these factors:\n\n1. **Storage Capacity Planning**:\n    - Estimate average file sizes and volumes\n    - Calculate required storage based on retention period\n    - Implement monitoring for storage utilization\n    - Plan for storage growth based on business projections\n\n2. **Path Selection Criteria**:\n    - Choose locations with sufficient disk space\n    - Ensure appropriate read/write permissions\n    - Select paths with reliable access (avoid temporary or volatile storage)\n    - Consider network latency for remote locations\n\n3. **Backup Naming Convention**:\n    - Default: Original filename with timestamp suffix\n    - Custom: Can be controlled through integration settings\n    - Avoid paths that may contain special characters that need escaping\n    - Consider filename length limitations of target filesystem\n\n4. 
**Security Considerations**:\n    - Restrict access to backup location to authorized personnel only\n    - Avoid public-facing directories\n    - Consider encryption for sensitive data backups\n    - Implement appropriate file permissions\n\n**Backup strategy recommendations**\n\n| Data Sensitivity | Recommended Approach | Path Considerations |\n|------------------|----------------------|---------------------|\n| Low | Local directory backup | Fast access, limited protection |\n| Medium | Network share with permissions | Balanced access/protection |\n| High | Secure storage with encryption | Highest protection, potential performance impact |\n| Regulated | Compliant storage with audit trail | Must meet specific regulatory requirements |\n\n**Integration patterns**\n\n**Temporary Processing Pattern**\n\nFor short-term processing needs:\n```\n/tmp/exports/backups\n```\n- Files stored temporarily during processing\n- Limited retention period\n- Optimized for processing speed\n- May be automatically cleaned up\n\n**Long-term Archival Pattern**\n\nFor regulatory or business retention requirements:\n```\n/archive/exports/2023/Q4\n```\n- Organized by time period\n- Structured for easy retrieval\n- May include additional metadata\n- Designed for long-term storage\n\n**Cloud Storage Pattern**\n\nFor scalable, managed storage:\n```\n/mnt/cloud/exports/client123\n```\n- Mounted cloud storage location\n- Potentially unlimited capacity\n- May include built-in versioning\n- Often includes automatic replication\n\n**Error handling guidance**\n\nWhen configuring backup paths, anticipate these common issues:\n\n- **Permission Denied**: Ensure service account has write access\n- **Path Not Found**: Verify directory exists or create it programmatically\n- **Disk Full**: Monitor storage capacity and implement alerts\n- **Path Too Long**: Be aware of filesystem path length limitations\n\n**Technical considerations**\n\n- Backup operations may impact performance for large files\n- 
Network paths may introduce latency and availability concerns\n- Some filesystems have case sensitivity differences (important for path matching)\n- Path separators vary by platform (/ vs \\)\n- Special characters in paths may require escaping in certain contexts\n- Consider implementing automatic cleanup policies for backups\n\n**System administration notes**\n\n- Backup paths should be included in system backup procedures\n- Monitor space utilization on backup volumes\n- Implement appropriate retention policies\n- Document backup path locations in system configuration\n- Consider periodic validation of backup file integrity\n"}}},"Http-2":{"type":"object","description":"Configuration for HTTP exports.\n\nIMPORTANT: When the _connectionId field points to a connection whose type is http, \nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all HTTP-based exports, as determined by the connection associated with the export.\n","properties":{"type":{"type":"string","enum":["file"],"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP exports.\n\nThis is an OPTIONAL field that should only be set in rare, specific cases. 
For standard REST API exports\n(Shopify, Salesforce, NetSuite, custom REST APIs, etc.), this field MUST be left undefined.\n\n**When to leave this field undefined (most common case)**\n\nLeave this field undefined for ALL standard data exports, including:\n- REST API exports that return JSON records\n- APIs that return XML records or structured data\n- Any export that retrieves business records, entities, or data objects\n- Standard CRUD operations that return record collections\n- GraphQL queries that return structured data\n- SOAP APIs that return structured responses\n\nExamples of exports that should have this field undefined:\n- \"Export all Shopify Customers\" → undefined (returns JSON customer records)\n- \"Retrieve orders from custom REST API\" → undefined (returns JSON order records)\n\n**When to set this field to 'file' (rare use case)**\n\nSet this field to 'file' ONLY when the HTTP endpoint is specifically designed to download files:\n- The endpoint returns raw binary file content (PDFs, images, ZIP files, etc.)\n- The endpoint is a file download service (e.g., downloading invoices, reports, attachments)\n- The response body contains file data that needs to be saved as a file, not parsed as records\n- You need to download and process files from a remote server\n\nExamples of when to set type: \"file\":\n- \"Download PDF invoices from the API\" → type: \"file\"\n- \"Retrieve image files from a file server\" → type: \"file\"\n- \"Download CSV files from an FTP server via HTTP\" → type: \"file\"\n\n**Implementation details**\n\nWhen this field is set to 'file':\n- The 'file' object property MUST also be configured\n- The export appears as a \"Transfer\" step in the Flow Builder UI\n- The system applies file-specific processing to the HTTP response\n- Downstream steps receive file content rather than record data\n\nWhen this field is undefined (default for most exports):\n- The export appears as a standard \"Export\" step in the Flow Builder UI\n- The 
system parses the HTTP response as structured data (JSON, XML, etc.)\n- Downstream steps receive record data that can be mapped and transformed\n\n**Decision flowchart**\n\n1. Does the API endpoint return business records/entities (customers, orders, products, etc.)?\n   → YES: Leave this field undefined\n2. Does the API endpoint return structured data (JSON objects, XML records)?\n   → YES: Leave this field undefined\n3. Does the API endpoint return raw file content (PDFs, images, binary data)?\n   → YES: Set this field to \"file\" (and configure the 'file' property)\n\nRemember: When in doubt, leave this field undefined. Most HTTP exports are standard data exports.\n"},"method":{"type":"string","description":"HTTP method used for the export request to retrieve data from the target API.\n\n- GET: Most commonly used for data retrieval operations (default)\n- POST: Used when request body criteria are needed, especially for RPC or SOAP/XML APIs\n- PUT: Available for specific APIs that support it for data retrieval\n- PATCH/DELETE: Less common for exports but available for specialized use cases\n\nConsult your target API's documentation to determine the appropriate method.\n","enum":["GET","POST","PUT","PATCH","DELETE"]},"relativeURI":{"type":"string","description":"The resource path portion of the API endpoint used for this export.\n\nThis value is combined with the baseURI defined in the associated connection to form the complete API endpoint URL. 
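\n\nTo illustrate how the two values combine (hypothetical values; the base URI and path shown are placeholders):\n\n```\nbaseURI (from connection):  https://api.example.com/v2\nrelativeURI (this field):   /orders?status=open\nRequest URL:                https://api.example.com/v2/orders?status=open\n```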
\n\nThe entire relativeURI can be defined using handlebars expressions to create dynamic paths:\n\nExamples:\n- Simple resource paths: \"/products\", \"/orders\", \"/customers\"\n- With query parameters: \"/orders?status=pending\", \"/products?category=electronics&limit=100\"\n- With path parameters: \"/customers/{{record.customerId}}/orders\", \"/accounts/{{record.accountId}}/transactions\"\n- With dynamic query values: \"/orders?since={{lastExportDateTime}}\"\n- Fully dynamic path: \"{{record.dynamicPath}}\"\n\nPath parameters, query parameters, or the entire URI can be dynamically generated using handlebars syntax. This is particularly useful for parameterized API calls or when the endpoint needs to be determined at runtime based on data or context.\n\n**Lookup export behavior with mappings**\n\n**CRITICAL**: For lookup exports (isLookup: true) that have mappings configured, the handlebars template evaluation for relativeURI always uses the **original input record** before any mapping transformations are applied.\n\nThis design ensures that:\n- Mappings can transform the record structure for the request body without affecting URI construction\n- Essential fields like record IDs remain accessible for building dynamic endpoints\n- The request body can be optimized for the target API while preserving URI parameters\n\n**Example Scenario:**\n```\nInput record: {\"customerId\": \"12345\", \"name\": \"John Doe\", \"email\": \"john@example.com\"}\nMappings: Transform to {\"customer_name\": \"John Doe\", \"contact_email\": \"john@example.com\"}\nrelativeURI: \"/customers/{{record.customerId}}/details\"\nResult: \"/customers/12345/details\" (uses original customerId, not mapped version)\n```\n\nThis prevents situations where mapping transformations would remove or rename fields needed for endpoint construction, ensuring reliable API calls regardless of how the request body is structured.\n"},"headers":{"type":"array","description":"Export-specific HTTP headers to include 
with API requests. Note that common headers like authentication are typically defined on the connection record rather than here.\n\nUse this field only for headers that are specific to this particular export operation. Headers defined here will be merged with (and can override) headers from the connection.\n\nExamples of export-specific headers:\n- Accept: To request specific content format for this export only\n- X-Custom-Filter: Export-specific filtering parameters\n                \nHeader values can be defined using handlebars expressions if you need to reference any dynamic data or configurations.\n\nFor lookup exports (isLookup: true) with mappings configured, header value templates render against the **pre-mapped** record (the original input record from the upstream flow step) — mappings do not rewrite header evaluation.\n","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}}},"requestMediaType":{"type":"string","description":"Override request media type. Use this field to handle the use case where the HTTP request requires a different media type than what is configured on the connection.\n\nMost APIs use a consistent media type across all endpoints, which should be configured at the connection resource. 
Use this field only when:\n\n- This specific endpoint requires a different format than other endpoints in the API\n- You need to override the connection-level setting for this particular export only\n\nCommon values:\n- \"json\": For JSON request bodies (Content-Type: application/json)\n- \"xml\": For XML request bodies (Content-Type: application/xml)\n- \"urlencoded\": For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- \"form-data\": For multipart form data, typically used for file uploads\n- \"plaintext\": For plain text content\n","enum":["json","xml","urlencoded","form-data","plaintext"]},"body":{"type":"string","description":"The HTTP request body to send with POST, PUT, or PATCH requests. This field is typically used to:\n\n1. Send query parameters to APIs that require them in the request body (e.g., GraphQL or SOAP APIs)\n2. Provide filtering criteria for data exports\n\nThe body content must match the format specified in the requestMediaType field (JSON, XML, etc.).\n\nYou can use handlebars expressions to create dynamic content:\n```\n{\n  \"query\": \"SELECT Id, Name FROM Account WHERE LastModifiedDate > {{lastExportDateTime}}\",\n  \"parameters\": {\n    \"customerId\": \"{{record.customerId}}\",\n    \"limit\": 100\n  }\n}\n```\n\nFor XML or SOAP requests:\n```\n<request>\n  <filter>\n    <updatedSince>{{lastExportDateTime}}</updatedSince>\n    <type>{{record.type}}</type>\n  </filter>\n</request>\n```\n"},"successMediaType":{"type":"string","description":"Specifies the media type (content type) expected in successful responses for this specific export. This field should only be used when:\n\n1. 
The response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON responses (typically with Content-Type: application/json)\n- \"xml\": For XML responses (typically with Content-Type: application/xml)\n- \"csv\": For CSV data (typically with Content-Type: text/csv)\n- \"plaintext\": For plain text responses\n","enum":["json","xml","csv","plaintext"]},"errorMediaType":{"type":"string","description":"Specifies the media type (content type) expected in error responses for this specific export. This field should only be used when:\n\n1. Error response format differs from the request format\n\nMost APIs return responses in the same format as the request, so this field is often unnecessary.\n\nCommon values:\n- \"json\": For JSON error responses (most common in modern APIs)\n- \"xml\": For XML error responses (common in SOAP and older REST APIs)\n- \"plaintext\": For plain text error messages\n","enum":["json","xml","plaintext"]},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource that handles polling for long-running API operations.\n\nAsync helpers bridge Celigo's synchronous flow engine with asynchronous external APIs that use a \"fire-and-check-back\" pattern (HTTP 202 responses, job tickets, feed/document IDs, etc.).\n\nUse this field when the export needs to:\n- Submit a request to an API that processes data asynchronously\n- Poll for status at configured intervals\n- Retrieve results once the external process completes\n\nCommon use cases include:\n- Amazon SP-API feeds\n- Large report generators\n- File conversion services\n- Image processors\n- Any API that needs minutes or hours to complete a requested operation\n"},"once":{"type":"object","description":"HTTP configuration specific to Once exports. 
Used to mark records as exported after successful processing.","properties":{"relativeURI":{"type":"string","description":"The relative URI used to mark records as exported. Called as a callback to the source system after successful processing.\n\n- Must be a relative path starting with \"/\"\n- Can include Handlebars variables: \"/orders/{{record.Id}}/exported\"\n- Common patterns: dedicated status endpoint or record-specific updates\n- Renders against the **pre-mapped** record (original extracted record); mappings do not apply to the callback URI.\n"},"method":{"type":"string","description":"The HTTP method used when calling back to mark records as exported.","enum":["GET","PUT","POST","PATCH","DELETE"]},"body":{"type":"string","description":"The HTTP request body used when calling back to mark records as exported. Can include Handlebars expressions for dynamic values."}}},"paging":{"type":"object","description":"Configuration object for navigating through multi-page API responses.\n\n**Overview for AI agents**\n\nThis object is critical for retrieving large datasets that cannot be returned in a single API response.\nThe pagination implementation determines how the system will retrieve subsequent pages of data after\nthe first request, enabling complete data collection regardless of volume.\n\n**Key decision points**\n\n1. **Identify the API's pagination mechanism** (check API documentation)\n2. **Select the corresponding method** value (most important field)\n3. **Configure the required fields** based on your selected method\n4. **Add pagination variables** to your request configuration\n5. **Consider last page detection** options if needed\n\n**Field dependencies by pagination method**\n\n1. 
**page**: Page number-based pagination (e.g., ?page=2)\n    - Required: Set `method` to \"page\"\n    - Optional: `page` - Set if first page index is not 0 (e.g., set to 1 for APIs that start at page 1)\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n2. **skip**: Offset/limit pagination (e.g., ?offset=100&limit=50)\n    - Required: Set `method` to \"skip\"\n    - Optional: `skip` - Set if first skip index is not 0\n    - Optional: `maxPagePath` - Path to find total pages in response\n    - Optional: `maxCountPath` - Path to find total records in response\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n3. **token**: Token-based pagination (e.g., ?page_token=abc123)\n    - Required: Set `method` to \"token\"\n    - Required: `path` - Location of the token in the response\n    - Required: `pathLocation` - Whether token is in \"body\" or \"header\"\n    - Optional: `token` - Set to provide initial token (rare)\n    - Optional: `pathAfterFirstRequest` - Only if token location changes after first page\n    - Optional: `relativeURI` - Only if subsequent page URLs differ from first page\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n4. **linkheader**: Link header pagination (uses HTTP Link header with rel values)\n    - Required: Set `method` to \"linkheader\"\n    - Optional: `linkHeaderRelation` - Set if relation is not the default \"next\"\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n5. 
**nextpageurl**: Complete next URL in response\n    - Required: Set `method` to \"nextpageurl\"\n    - Required: `path` - Location of the next URL in the response\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n6. **relativeuri**: Custom relative URI pagination\n    - Required: Set `method` to \"relativeuri\"\n    - Required: `relativeURI` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n7. **body**: Custom request body pagination\n    - Required: Set `method` to \"body\"\n    - Required: `body` - Configure using handlebars with previous_page context\n    - Optional: `resourcePath` - Only if records location changes in follow-up responses\n\n**Pagination variables**\n\nBased on your selected method, you MUST add one of these variables to your request configuration:\n\n- For page-based: Add `{{export.http.paging.page}}` to the URI or body\n- For offset-based: Add `{{export.http.paging.skip}}` to the URI or body\n- For token-based: Add `{{export.http.paging.token}}` to the URI or body\n\n**Last page detection options**\n\nThese fields can be used with any pagination method to detect the last page:\n\n- `lastPageStatusCode` - Detect last page by HTTP status code\n- `lastPagePath` - JSON path to check for last page indicator\n- `lastPageValues` - Values at lastPagePath that indicate last page\n\n**Common implementation patterns**\n\nMost APIs require only 2-3 fields to be configured. 
The most common patterns are:\n\n```json\n// Page-based pagination (starting at page 1)\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n\n// Token-based pagination\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\"\n}\n\n// Link header pagination (simplest to configure)\n{\n  \"method\": \"linkheader\"\n}\n```\n\nIMPORTANT: Incorrect pagination configuration is one of the most common causes of incomplete data retrieval. Take time to properly identify and configure the correct pagination method for your API.\n","properties":{"method":{"type":"string","description":"Defines the pagination strategy that will be used to retrieve all data pages.\n\n**Importance for AI agents**\n\nThis is the MOST CRITICAL field in pagination configuration. It determines:\n- Which other fields are required vs. optional\n- How subsequent pages will be requested\n- Which pagination variables must be used in requests\n- How the system detects the last page\n\n**Pagination methods and their requirements**\n\n**Page-Based Pagination (`\"page\"`)**\n```\n\"method\": \"page\"\n```\n- **Implementation**: Uses increasing page numbers (e.g., ?page=1, ?page=2)\n- **Required Setup**: Add `{{export.http.paging.page}}` to your URI or body\n- **Common Fields**: page (if starting at 1 instead of 0)\n- **API Examples**: Most REST APIs, Shopify, WordPress\n- **When to Use**: APIs that accept a page number parameter\n\n**Offset/Skip Pagination (`\"skip\"`)**\n```\n\"method\": \"skip\"\n```\n- **Implementation**: Uses increasing offset values (e.g., ?offset=0, ?offset=100)\n- **Required Setup**: Add `{{export.http.paging.skip}}` to your URI or body\n- **Common Fields**: Usually none (system handles offset increments)\n- **API Examples**: MongoDB, SQL-based APIs\n- **When to Use**: APIs that use offset/limit or skip/limit parameters\n\n**Token-Based Pagination (`\"token\"`)**\n```\n\"method\": \"token\"\n```\n- **Implementation**: Passes tokens from previous responses 
to get next pages\n- **Required Setup**: \n    1. Add `{{export.http.paging.token}}` to your URI or body\n    2. Set path to location of token in response\n    3. Set pathLocation to \"body\" or \"header\"\n- **API Examples**: AWS, Google Cloud, modern REST APIs\n- **When to Use**: APIs that provide continuation tokens/cursors\n\n**Link Header Pagination (`\"linkheader\"`)**\n```\n\"method\": \"linkheader\"\n```\n- **Implementation**: Follows URLs in HTTP Link headers automatically\n- **Required Setup**: None (simplest to configure)\n- **Common Fields**: Usually none (automatic)\n- **API Examples**: GitHub, GitLab, any API following RFC 5988\n- **When to Use**: APIs that return Link headers with rel=\"next\"\n\n**Next Page URL (`\"nextpageurl\"`)**\n```\n\"method\": \"nextpageurl\"\n```\n- **Implementation**: Uses complete URLs returned in response body\n- **Required Setup**: Set path to location of next URL in response\n- **API Examples**: Some social media APIs, GraphQL implementations\n- **When to Use**: APIs that include complete next page URLs in responses\n\n**Custom Relative URI (`\"relativeuri\"`)**\n```\n\"method\": \"relativeuri\"\n```\n- **Implementation**: Builds custom URIs based on previous responses\n- **Required Setup**: Configure relativeURI with handlebars templates\n- **When to Use**: Non-standard pagination requiring custom logic\n\n**Custom Request Body (`\"body\"`)**\n```\n\"method\": \"body\"\n```\n- **Implementation**: Creates custom request bodies for pagination\n- **Required Setup**: Configure body with handlebars templates\n- **API Examples**: GraphQL, SOAP, RPC APIs\n- **When to Use**: APIs requiring POST requests with pagination in body\n\n**Selection guidance**\n\nTo determine the correct method:\n1. Check the API documentation for pagination instructions\n2. Look for examples of multi-page requests in API samples\n3. Test with a small request to observe pagination mechanics\n4. 
Choose the method matching the API's expected behavior\n\nIMPORTANT: Using the wrong pagination method will result in either errors or incomplete data retrieval.\n","enum":["linkheader","page","skip","token","nextpageurl","relativeuri","body"]},"page":{"type":"integer","description":"Specifies the starting page number for page-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"page\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 1 (most APIs), 0 (zero-indexed APIs)\n\n**Implementation guidance**\n\nThis field should be set when the API's first page is not zero-indexed. Most APIs use 1 as \ntheir first page number, in which case you should set:\n\n```json\n{\n  \"method\": \"page\",\n  \"page\": 1\n}\n```\n\nThe system will automatically increment this value for each subsequent page request.\n\n**Examples**\n\n- Shopify uses page=1 for first page\n- Some GraphQL APIs use page=0 for first page\n"},"skip":{"type":"integer","description":"Specifies the starting offset value for offset/skip-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"skip\"\n- OPTIONAL: Defaults to 0 if not provided\n- COMMON VALUES: 0 (vast majority of APIs)\n\n**Implementation guidance**\n\nThis field rarely needs to be set since most APIs use 0 as the starting offset.\nThe system will automatically increment this value by the pageSize for each subsequent request.\n\nExample calculation for page transitions:\n- First page: offset=0 (or your configured value)\n- Second page: offset=pageSize\n- Third page: offset=pageSize*2\n\n**When to use**\n\nOnly set this if the API requires a non-zero starting offset value, which is very uncommon.\n"},"token":{"type":"string","description":"Specifies an initial token value for token-based pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Leave empty for normal pagination from the beginning\n- ADVANCED USE ONLY: Most implementations should NOT set 
this\n\n**Implementation guidance**\n\nToken-based pagination normally works by:\n1. Making the first request with no token\n2. Extracting a token from the response (using the path field)\n3. Using that token for the next request\n\nThis field should ONLY be set in rare scenarios:\n- Resuming a previous pagination sequence from a known token\n- APIs that require a token value even for the first request\n- Testing specific pagination scenarios\n\n**Example scenarios**\n\n```json\n// To resume pagination from a specific point:\n{\n  \"method\": \"token\",\n  \"path\": \"meta.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"eyJwYWdlIjozfQ==\"\n}\n\n// For APIs requiring an initial token:\n{\n  \"method\": \"token\",\n  \"path\": \"pagination.nextToken\",\n  \"pathLocation\": \"body\",\n  \"token\": \"start\"\n}\n```\n"},"path":{"type":"string","description":"Specifies the location of pagination information in API responses.\n\n**Field behavior**\n\nThis field has different requirements based on the pagination method:\n\n- REQUIRED for method=\"token\":\n  Indicates where to find the token for the next page\n\n- REQUIRED for method=\"nextpageurl\":\n  Indicates where to find the complete URL for the next page\n\n- NOT USED for other pagination methods\n\n**Implementation guidance**\n\n**For token-based pagination (method=\"token\")**\n\n1. When pathLocation=\"body\":\n    - Set to a JSON path that points to the token in the response body\n    - Uses dot notation to navigate JSON objects\n    \n    Example response:\n    ```json\n    {\n      \"data\": [...],\n      \"meta\": {\n        \"nextToken\": \"abc123\"\n      }\n    }\n    ```\n    Correct path: \"meta.nextToken\"\n\n2. 
When pathLocation=\"header\":\n    - Set to the exact name of the HTTP header containing the token\n    - Case-sensitive, must match the header exactly\n    \n    Example header:\n    ```\n    X-Pagination-Token: abc123\n    ```\n    Correct path: \"X-Pagination-Token\"\n\n**For next page URL pagination (method=\"nextpageurl\")**\n\n- Set to a JSON path that points to the complete URL in the response\n\nExample response:\n```json\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"next_url\": \"https://api.example.com/data?page=2\"\n  }\n}\n```\nCorrect path: \"pagination.next_url\"\n\n**Common error patterns**\n\n1. Missing dot notation: \"meta.nextToken\" not \"meta/nextToken\"\n2. Incorrect case: \"Meta.NextToken\" when API returns \"meta.nextToken\"\n3. Missing array indices when needed: \"items[0].next\" not \"items.next\"\n"},"pathLocation":{"type":"string","description":"Specifies where to find the pagination token in the API response.\n\n**Field behavior**\n\n- REQUIRED for method=\"token\"\n- NOT USED for other pagination methods\n- LIMITED to two possible values: \"body\" or \"header\"\n\n**Implementation guidance**\n\nWhen using token-based pagination, you must:\n1. Set method=\"token\"\n2. Set path to locate the token\n3. Set pathLocation to indicate where the token is found\n\n**When to use \"body\"**\n\nSet to \"body\" when the token is contained in the JSON response body.\nThis is the most common scenario for modern APIs.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"metadata.nextToken\",\n  \"pathLocation\": \"body\"\n}\n```\n\n**When to use \"header\"**\n\nSet to \"header\" when the token is returned as an HTTP header.\n\nExample configuration:\n```json\n{\n  \"method\": \"token\",\n  \"path\": \"X-Next-Page-Token\",\n  \"pathLocation\": \"header\" \n}\n```\n\n**Dependency chain**\n\nThis field participates in a critical dependency chain:\n\n1. Set method=\"token\"\n2. Set pathLocation=\"body\" or \"header\"\n3. 
Set path to token location based on pathLocation value\n4. Add {{export.http.paging.token}} to URI or body parameters\n\nAll four elements must be properly configured for token pagination to work.\n","enum":["body","header"]},"pathAfterFirstRequest":{"type":"string","description":"Specifies an alternative path for token extraction after the first page request.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"token\"\n- OPTIONAL: Only needed when token location changes after first page\n- Uses same format as the path field (JSON path or header name)\n\n**Implementation guidance**\n\nThis field should only be set when the API changes its response structure between\nthe first page and subsequent pages. Most APIs maintain consistent structure, but\nsome APIs may:\n\n1. Use different response formats for first vs. subsequent pages\n2. Move the token to a different location after the initial response\n3. Change the field name for the token in follow-up responses\n\nExample scenario where this is needed:\n```json\n// First page response:\n{\n  \"data\": [...],\n  \"meta\": {\n    \"initialNextToken\": \"abc123\"\n  }\n}\n\n// Subsequent page responses:\n{\n  \"data\": [...],\n  \"pagination\": {\n    \"nextToken\": \"def456\"\n  }\n}\n```\n\nIn this case:\n- path = \"meta.initialNextToken\" (for first page)\n- pathAfterFirstRequest = \"pagination.nextToken\" (for subsequent pages)\n\n**Dependency chain**\n\nThis field works in conjunction with the main path field:\n1. First request: token is extracted using the path field\n2. Subsequent requests: token is extracted using pathAfterFirstRequest\n\nIMPORTANT: Only set this field if you've verified that the API actually changes\nits response structure. Setting it unnecessarily can cause pagination to fail.\n"},"relativeURI":{"type":"string","description":"Override relative URI for subsequent page requests. 
This field appears as \"Override relative URI for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different relative URI than what is configured in the primary relative URI field. Most APIs use the same endpoint for all pages and vary only the query parameters, but some may require a completely different path for subsequent requests.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- `{{previous_page.full_response.next_page}}` - Use a complete next page URL returned by the API\n- `/customers?page={{previous_page.full_response.page_count}}` - Use a page number from the response\n- `/orders?cursor={{previous_page.full_response.next_cursor}}` - Use a cursor/token from the response\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main relative URI can be used for all page requests.\n"},"body":{"type":"string","description":"Override HTTP request body for subsequent page requests. This field appears as \"Override HTTP request body for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests require a different HTTP request body than what is configured in the primary HTTP request body field. 
Most APIs use query parameters for pagination, but some (especially GraphQL or SOAP APIs) may require pagination parameters to be sent in the request body.\n\nYou can use handlebars expressions to reference data from the previous API response using the `previous_page` context object, which contains:\n\n- `previous_page.full_response` - The entire JSON response body from the previous request\n- `previous_page.last_record` - The last record from the previous page of results\n- `previous_page.headers` - All HTTP headers from the previous response\n\nCommon patterns include:\n- Including the next cursor in a GraphQL query: `{\"query\": \"...\", \"variables\": {\"cursor\": \"{{previous_page.full_response.pageInfo.endCursor}}\"}}`\n- Using the last record's ID: `{\"after\": \"{{previous_page.last_record.id}}\", \"limit\": 100}`\n- Including a page number: `{\"page\": {{previous_page.full_response.meta.next_page}}, \"pageSize\": 50}`\n\nThe exact structure of data available depends on your specific API's response format.\n\nLeave this field empty if the main HTTP request body can be used for all page requests.\n"},"linkHeaderRelation":{"type":"string","description":"Specifies which relation in the Link header to use for pagination.\n\n**Field behavior**\n\n- RELEVANT ONLY for method=\"linkheader\"\n- OPTIONAL: Defaults to \"next\" if not provided\n- Case-sensitive value matching the rel attribute in Link header\n\n**Implementation guidance**\n\nLink header pagination follows the RFC 5988 standard where pagination links\nare provided in HTTP headers. 
A typical Link header looks like:\n\n```\nLink: <https://api.example.com/items?page=2>; rel=\"next\", <https://api.example.com/items?page=1>; rel=\"prev\"\n```\n\nThis field allows you to specify which relation type to follow for pagination:\n\n```\n\"linkHeaderRelation\": \"next\"  // Default value\n```\n\nSome APIs use non-standard relation names, which is when you'd need to change this:\n\n```\n\"linkHeaderRelation\": \"successor\"  // Custom relation name\n```\n\n**Common values**\n\n- \"next\" (default): Standard for most RFC 5988 compliant APIs\n- \"successor\": Alternative used by some APIs\n- \"forward\": Alternative used by some APIs\n- \"nextpage\": Non-standard but used by some implementations\n\nIMPORTANT: This is case-sensitive and must exactly match the relation value in\nthe Link header. If the API includes the prefix \"rel=\" in the header, do NOT\ninclude it here.\n"},"resourcePath":{"type":"string","description":"Override path to records for subsequent page requests. This field appears as \"Override path to records for subsequent page requests\" in the UI.\n\nThis field only needs to be set if subsequent page requests return a different response structure, and the records are located in a different place than the original request.\n\nFor example, if the first request returns records in a structure like {\"data\": [...]} but subsequent page responses have records in {\"results\": [...]} instead, you would set this field to \"results\" to correctly extract data from the follow-up pages.\n\nLeave this field empty if all pages use the same response structure.\n"},"lastPageStatusCode":{"type":"integer","description":"Specifies a custom HTTP status code that indicates the last page of results.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with non-standard last page indicators\n- Applies to all pagination methods\n- Overrides the default 404 end-of-pagination detection\n\n**Implementation guidance**\n\nBy default, the system treats a 404 
status code as an indicator that\npagination is complete. This field allows you to specify a different\nstatus code if your API uses an alternative convention.\n\nCommon scenarios where this is needed:\n\n1. APIs that return 204 (No Content) for empty result sets\n```\n\"lastPageStatusCode\": 204\n```\n\n2. APIs that return 400 (Bad Request) when requesting beyond available pages\n```\n\"lastPageStatusCode\": 400\n```\n\n3. APIs with custom error codes for pagination completion\n```\n\"lastPageStatusCode\": 499\n```\n\n**Technical details**\n\nWhen this status code is received, the system:\n- Stops the pagination process\n- Considers the data collection complete\n- Does not treat the response as an error\n- Does not attempt to process any response body\n\nIMPORTANT: Only set this if your API explicitly uses a non-404 status code\nto indicate the end of pagination. Setting this incorrectly could cause\npremature termination of data collection or error handling issues.\n"},"lastPagePath":{"type":"string","description":"Specifies a JSON path to a field that indicates the end of pagination.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for APIs with field-based pagination completion signals\n- Works with all pagination methods\n- Used in conjunction with lastPageValues\n- JSON path notation to a field in the response body\n\n**Implementation guidance**\n\nThis field is used when an API indicates the last page through a field\nin the response body rather than using HTTP status codes. The system\nchecks this path in each response to determine if pagination is complete.\n\nCommon patterns include:\n\n1. Boolean flag fields\n```\n\"lastPagePath\": \"meta.isLastPage\"\n```\n\n2. \"Has more\" indicators\n```\n\"lastPagePath\": \"pagination.hasMore\"\n```\n\n3. Cursor/token fields that are null/empty on the last page\n```\n\"lastPagePath\": \"meta.nextCursor\"\n```\n\n4. 
Error message fields\n```\n\"lastPagePath\": \"error.message\"\n```\n\n**Dependency chain**\n\nThis field must be used with lastPageValues, which specifies the value(s)\nat this path that indicate pagination is complete. For example:\n\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\nIMPORTANT: The path is evaluated against each response using JSON path notation.\nIf the path doesn't exist in the response, the condition is not considered met.\n"},"lastPageValues":{"type":"array","description":"Specifies which value(s) at the lastPagePath indicate the end of pagination.\n\n**Field behavior**\n\n- REQUIRED when lastPagePath is used\n- Array of string values (even for boolean or numeric comparisons)\n- Case-sensitive matching against the value at lastPagePath\n- Multiple values create an OR condition (any match indicates last page)\n\n**Implementation guidance**\n\nThis field works in conjunction with lastPagePath to determine when\npagination is complete. The system looks for the field specified by\nlastPagePath and compares its value against each entry in this array.\n\nCommon patterns include:\n\n1. For boolean \"isLastPage\" flags (true means last page)\n```json\n\"lastPagePath\": \"meta.isLastPage\",\n\"lastPageValues\": [\"true\"]\n```\n\n2. For \"hasMore\" flags (false means last page)\n```json\n\"lastPagePath\": \"pagination.hasMore\",\n\"lastPageValues\": [\"false\", \"0\"]\n```\n\n3. For empty cursors (null/empty string means last page)\n```json\n\"lastPagePath\": \"meta.nextCursor\",\n\"lastPageValues\": [\"null\", \"\"]\n```\n\n4. 
For specific error messages\n```json\n\"lastPagePath\": \"error.message\",\n\"lastPageValues\": [\"No more pages\", \"End of results\"]\n```\n\n**Technical details**\n\n- All values must be specified as strings, even for boolean or numeric comparisons\n- JSON null should be represented as the string \"null\"\n- Empty string is represented as \"\"\n- The comparison is exact and case-sensitive\n\nIMPORTANT: This field is only considered when the lastPagePath exists in the\nresponse. Both lastPagePath and lastPageValues must be configured correctly\nfor proper pagination termination.\n","items":{"type":"string"}},"maxPagePath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of pages available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Used to optimize pagination by detecting the last page early\n- Ignored for other pagination methods\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of pages. When configured, the system:\n\n1. Extracts the total page count from each response\n2. Compares the current page number against this total\n3. 
Stops pagination when the maximum page is reached\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with page counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalPages\": 5,\n    \"currentPage\": 2\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"pageCount\": 5,\n    \"page\": 2\n  }\n}\n\n// Pattern 3: Root level pagination info\n{\n  \"items\": [...],\n  \"pages\": 5,\n  \"current\": 2\n}\n```\n\n**Usage scenarios**\n\nMost useful when:\n- The API reliably includes total page counts\n- You want to prevent unnecessary requests after the last page\n- The 404/last page detection mechanisms aren't suitable\n\nIMPORTANT: This field should point to the TOTAL number of pages,\nnot the current page number. The value must be numeric (integer).\n"},"maxCountPath":{"type":"string","description":"Specifies a JSON path to a field containing the total number of records available.\n\n**Field behavior**\n\n- OPTIONAL: Only relevant for \"page\" and \"skip\" pagination methods\n- JSON path to a numeric field in the response\n- Alternative to maxPagePath for record-based termination\n- Used when APIs provide total record count instead of page count\n\n**Implementation guidance**\n\nThis field enables pagination optimization when an API includes metadata\nabout the total number of records rather than pages. When configured,\nthe system:\n\n1. Extracts the total record count from each response\n2. Tracks the total number of records processed so far\n3. 
Stops pagination when all records have been processed\n\nCommon API response patterns include:\n\n```json\n// Pattern 1: Metadata section with record counts\n{\n  \"data\": [...],\n  \"meta\": {\n    \"totalCount\": 42,\n    \"page\": 2,\n    \"pageSize\": 10\n  }\n}\n\n// Pattern 2: Pagination object\n{\n  \"results\": [...],\n  \"pagination\": {\n    \"total\": 42,\n    \"offset\": 20,\n    \"limit\": 10\n  }\n}\n\n// Pattern 3: Root level count info\n{\n  \"items\": [...],\n  \"count\": 42,\n  \"page\": 2\n}\n```\n\n**Relationship with maxPagePath**\n\nThis field is an alternative to maxPagePath:\n- Use maxPagePath when the API provides a total page count\n- Use maxCountPath when the API provides a total record count\n- If both are provided, maxPagePath takes precedence\n\nIMPORTANT: This field should point to the TOTAL number of records,\nnot the number of records in the current page. The value must be\nnumeric (integer).\n"}}},"response":{"type":"object","description":"Configuration for parsing and interpreting HTTP responses returned by the source API.\n\nThis object tells the export engine how to extract records from the API response body\nand how to detect success or failure at the response level.\n\n**Most important field:** resourcePath\n\n`resourcePath` is the single most commonly needed field in this object. When an API\nwraps its records inside a JSON envelope, you MUST set resourcePath to the dot-path\nthat points to the array of records. Without it, the export treats the entire response\nas a single record.\n\nExample API response:\n```json\n{\n  \"status\": \"ok\",\n  \"data\": {\n    \"customers\": [\n      {\"id\": 1, \"name\": \"Alice\"},\n      {\"id\": 2, \"name\": \"Bob\"}\n    ]\n  }\n}\n```\n→ Set `resourcePath` to `data.customers` so the export produces 2 records.\n\n**When to leave this object undefined**\n\nIf the API returns a bare JSON array (e.g. 
`[{\"id\":1}, {\"id\":2}]`) with no\nwrapper object, you do not need this object at all.\n","properties":{"resourcePath":{"type":"string","description":"The dot-separated path to the array of records inside the API response body.\n\n**Critical field for correct data extraction**\n\nMost APIs wrap their data in an envelope object. This field tells the export\nwhere to find the actual records within that envelope. Without this field,\nthe export treats the entire response body as a single record, which is\nalmost never the desired behavior when the response has a wrapper.\n\n**How it works**\n\nGiven an API response like:\n```json\n{\n  \"meta\": {\"page\": 1, \"total\": 42},\n  \"results\": [\n    {\"id\": \"A\", \"value\": 10},\n    {\"id\": \"B\", \"value\": 20}\n  ]\n}\n```\nSetting `resourcePath` to `results` causes the export to produce 2 records\n(`{\"id\":\"A\",\"value\":10}` and `{\"id\":\"B\",\"value\":20}`).\n\nFor deeply nested responses:\n```json\n{\n  \"slideshow\": {\n    \"slides\": [{\"title\": \"Slide 1\"}, {\"title\": \"Slide 2\"}]\n  }\n}\n```\nSet `resourcePath` to `slideshow.slides` to get each slide as a record.\n\n**When to set this field**\n\n- The API response is a JSON object (not a bare array) and the records are\n  nested inside it → set this to the path\n- The API response is a bare JSON array → leave undefined (records are\n  already at the top level)\n\n**Common patterns**\n\n| API response structure | resourcePath value |\n|---|---|\n| `{\"data\": [...]}` | `data` |\n| `{\"results\": [...]}` | `results` |\n| `{\"items\": [...]}` | `items` |\n| `{\"records\": [...]}` | `records` |\n| `{\"response\": {\"data\": [...]}}` | `response.data` |\n| `{\"slideshow\": {\"slides\": [...]}}` | `slideshow.slides` |\n| `[...]` (bare array) | leave undefined |\n\n**Important distinction**\n\nThis field extracts records from the **API response**. 
Do NOT confuse it with:\n- `oneToMany` + `pathToMany` — which unwrap child arrays from *input records*\n  in lookup/import steps (a completely different mechanism)\n- `paging.resourcePath` — which overrides the record location for *subsequent*\n  page requests only (when follow-up pages use a different response structure)\n"},"resourceIdPath":{"type":"string","description":"Path to the unique identifier field within each individual record in the response.\n\nUsed primarily when processing results of asynchronous import responses.\nIf not specified, the system looks for standard `id` or `_id` fields automatically.\n"},"successPath":{"type":"string","description":"Path to a field in the response that indicates whether the API call succeeded.\n\nUse this when the API returns HTTP 200 for all requests but signals success or\nfailure through a field in the response body.\n\nMust be used together with `successValues` to define which values at this path\nindicate success.\n\nExample: If the API returns `{\"status\": \"ok\", \"data\": [...]}`, set\n`successPath` to `status` and `successValues` to `[\"ok\"]`.\n"},"successValues":{"type":"array","items":{"type":"string"},"description":"Values at the `successPath` location that indicate the API call was successful.\n\nWhen the value at `successPath` matches any entry in this array, the response\nis treated as successful. If the value does not match, the response is treated\nas an error.\n\nAll values are compared as strings. For boolean fields, use `\"true\"` or `\"false\"`.\n"},"errorPath":{"type":"string","description":"Path to the error message field in the response body.\n\nUsed to extract a meaningful error message when the API returns an error\nresponse. 
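For example, if the API responds with\n```\n{\"error\": {\"message\": \"Rate limit exceeded\"}}\n```\nsetting `errorPath` to `error.message` extracts \"Rate limit exceeded\" as the error message (this response shape is illustrative only, not tied to any specific API). 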
The value at this path is included in error logs and error records.\n"},"failPath":{"type":"string","description":"Path to a field that identifies a failed response even when the HTTP status code is 200.\n\nSimilar to `successPath` but inverted logic — checks for failure indicators.\nMust be used together with `failValues`.\n"},"failValues":{"type":"array","items":{"type":"string"},"description":"Values at the `failPath` location that indicate the API call failed.\n\nWhen the value at `failPath` matches any entry in this array, the response\nis treated as a failure even if the HTTP status code was 200.\n"},"blobFormat":{"type":"string","description":"Character encoding for blob export responses.\n\nOnly relevant when the export type is \"blob\" (http.type = \"file\" or\nexport type = \"blob\"). Specifies how to decode the binary response body.\n","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"]}}},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"sendAuthForFileDownloads":{"type":"boolean","description":"Whether to include authentication headers when downloading files."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"_httpConnectorEndpointId: Identifier for the HTTP connector endpoint used in the integration or API call. This identifier uniquely specifies which HTTP connector endpoint configuration should be utilized to route and process the request within the system. 
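For example (the value shown is an illustrative ObjectId, not a real endpoint reference):\n```\n\"_httpConnectorEndpointId\": \"507f1f77bcf86cd799439011\"\n```\n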
It ensures that API calls are directed to the correct external or internal HTTP service endpoint as defined in the integration setup.\n\n**Field behavior**\n- Uniquely identifies the HTTP connector endpoint so requests are routed to the correct endpoint configuration (URL, headers, authentication, and other settings).\n- Typically set alongside `_httpConnectorVersionId` and `_httpConnectorResourceId` when an export is built against a published HTTP connector.\n- Generally remains unchanged after initial assignment to keep routing behavior consistent.\n\n**Implementation guidance**\n- Must reference an existing, active HTTP connector endpoint; a missing or invalid reference typically causes the integration or API call to fail.\n- Changes to the endpoint configuration referenced by this ID affect every integration that relies on it.\n\n**Technical details**\n- Data type: String\n- Format: ObjectId (per this field's `format: objectId`), the same identifier style used by `_httpConnectorVersionId` and `_httpConnectorResourceId`\n- Stored as a reference linking to the HTTP connector endpoint's configuration"}}},"Salesforce-3":{"type":"object","description":"Configuration object for Salesforce data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a Salesforce connection\nand must not be included for other connection types. It defines how data is extracted\nfrom Salesforce, either through queries or real-time events.\n\nFor optimal AI agent implementation, consider these guidelines:\n\n**Salesforce export modes**\n\nSalesforce exports offer three fundamentally different operating modes:\n\n1. **SOQL Query-based Exports** (type=\"soql\")\n    - Scheduled or on-demand batch processing\n    - Uses SOQL queries to retrieve data\n    - Supports both REST and Bulk API\n    - Can be configured as lookups (isLookup=true)\n    - Requires the \"soql\" object with query configuration\n\n2. **Real-time Event Listeners** (type=\"distributed\")\n    - Responds to Salesforce events as they happen\n    - Uses Salesforce's streaming API and platform events\n    - Always appears as a \"Listener\" in the flow builder UI\n    - Requires the \"distributed\" object with event configuration\n\n3. 
**File/Blob Exports** (when export.type=\"blob\")\n    - Retrieves files stored in Salesforce\n    - Requires sObjectType and id fields\n    - Supports Attachments, ContentVersion, and Document objects\n\n**Implementation requirements**\n\nThe salesforce object has conditional requirements based on the selected type:\n\n- For SOQL exports (type=\"soql\"):\n  Required fields: type, soql.query\n  Optional fields: api, includeDeletedRecords, bulk (when api=\"bulk\")\n\n- For Distributed exports (type=\"distributed\"):\n  Required fields: type, distributed configuration\n  Optional fields: distributed.referencedFields, distributed.qualifier\n\n- For Blob exports (when export.type=\"blob\"):\n  Required fields: sObjectType, id\n","properties":{"type":{"type":"string","description":"Defines the fundamental data extraction method for Salesforce exports.\n\n**Field behavior**\n\nThis field determines the core operating mode of the Salesforce export:\n\n- REQUIRED for all Salesforce exports\n- Controls which additional configuration objects must be provided\n- Affects how the export appears and functions in the flow builder UI\n- Cannot be changed after creation without significant reconfiguration\n\n**Available types**\n\n**SOQL Query-based Export**\n```\n\"type\": \"soql\"\n```\n\n- **Behavior**: Executes SOQL queries against Salesforce on schedule or demand\n- **UI Appearance**: \"Export\" or \"Lookup\" based on isLookup value\n- **Required Config**: Must provide the \"soql\" object with a valid query\n- **Use Cases**: Batch data extraction, delta synchronization, data migration\n- **Dependencies**:\n  - Compatible with both \"rest\" and \"bulk\" API options\n  - Works with standard, delta, test, and once export types\n\n**Real-time Event Listener**\n```\n\"type\": \"distributed\"\n```\n\n- **Behavior**: Listens for real-time Salesforce events (create/update/delete)\n- **UI Appearance**: Always appears as a \"Listener\" in the flow builder\n- **Required Config**: 
Must provide the \"distributed\" object with event configuration\n- **Use Cases**: Real-time synchronization, event-driven integration\n- **Dependencies**:\n  - Only uses REST API (api field is ignored)\n  - Automatically configured with trigger logic in Salesforce\n  - Only compatible with standard export type (ignores delta/test/once)\n\n**Implementation considerations**\n\nThe type selection creates a fundamental difference in how data flows:\n\n- \"soql\" operates on a pull model where the integration initiates data retrieval\n- \"distributed\" operates on a push model where Salesforce events trigger the integration\n\nIMPORTANT: Choose \"soql\" for batch processing and lookups; choose \"distributed\" for\nreal-time event handling. This decision affects all other configuration aspects.\n","enum":["soql","distributed"]},"sObjectType":{"type":"string","description":"Specifies the Salesforce object type for the export operation.\n\n**Field behavior**\n\nThis field determines which Salesforce object is being exported:\n\n- **REQUIRED** when the parent export's type is \"distributed\"\n- **REQUIRED** when the parent export's type is \"blob\"\n- Optional for \"soql\" exports (can be inferred from the SOQL query)\n- Must be a valid Salesforce object API name\n\n**Use cases by export type**\n\n**Distributed Exports**\n```\n\"sObjectType\": \"Account\"\n\"sObjectType\": \"Contact\"\n\"sObjectType\": \"Opportunity\"\n\"sObjectType\": \"Custom_Object__c\"\n```\n\n- **Purpose**: Specifies the primary object type being exported from Salesforce\n- **Valid Values**: Any standard or custom Salesforce object (Account, Contact, Opportunity, Lead, Case, Custom_Object__c, etc.)\n- **API Access**: Uses the specified object's metadata and SOQL/REST APIs\n- **Use Cases**: Real-time distributed processing of Salesforce records\n- **Requirements**: Object must exist in the connected Salesforce org\n\n**Blob/File Exports**\n```\n\"sObjectType\": \"Attachment\"\n\"sObjectType\": 
\"ContentVersion\"\n\"sObjectType\": \"Document\"\n```\n\n- **Purpose**: Specifies which Salesforce file storage object contains the file data\n- **Valid Values**: File storage objects only (Attachment, ContentVersion, Document)\n- **API Access**: Uses file-specific APIs for data retrieval\n- **Use Cases**: Extracting files and binary data from Salesforce\n- **Requirements**: Must be used with the \"id\" field to specify the file record\n\n**Soql Exports**\n```\n\"sObjectType\": \"Account\"  // Optional - can be inferred from query\n```\n\n- **Purpose**: Optional hint about the primary object in the SOQL query\n- **Valid Values**: Any Salesforce object referenced in the query\n- **Use Cases**: Query optimization and metadata context\n\n**Implementation notes**\n\nFor distributed exports, this field is essential for:\n- Setting up proper event listeners and triggers\n- Configuring field metadata and validation\n- Enabling related object processing\n- Determining appropriate API endpoints\n\nFor blob exports, this field works with the \"id\" field to retrieve specific file records.\n\nIMPORTANT: The object specified must exist in the target Salesforce org and be accessible\nto the integration user account.\n"},"id":{"type":"string","description":"Specifies the Salesforce record ID of the file to retrieve for blob exports.\n\n**Field behavior**\n\nThis field identifies the specific file record in Salesforce:\n\n- REQUIRED when the parent export's type is \"blob\"\n- Must be a valid Salesforce ID or a handlebars expression\n- Used in conjunction with sObjectType to retrieve the file\n- Not used for regular data exports (type=\"soql\" or \"distributed\")\n\n**Implementation patterns**\n\n**Static File id**\n```\n\"id\": \"00P5f00000ZQcTZEA1\"\n```\n\n- References a specific, fixed file in Salesforce\n- Useful for retrieving standard documents or templates\n- Always retrieves the same file on each execution\n- Simple to configure but lacks flexibility\n\n**Dynamic File 
id (Handlebars)**\n```\n\"id\": \"{{record.Attachment_Id__c}}\"\n```\n\n- References a file ID from input data using handlebars\n- Requires the export to be used as a lookup (isLookup=true)\n- Dynamically determines which file to retrieve at runtime\n- Allows for contextual file retrieval based on previous steps\n\n**Technical details**\n\n- For ContentVersion objects, this should be the ContentVersion ID\n- For Attachment objects, this should be the Attachment ID\n- For Document objects, this should be the Document ID\n\nIMPORTANT: Salesforce IDs are 15 or 18 characters, case-sensitive for 15-character\nversions, and case-insensitive for 18-character versions. When using handlebars,\nensure the referenced field contains a valid Salesforce ID.\n"},"includeDeletedRecords":{"type":"boolean","description":"Controls whether the export retrieves records from the Salesforce Recycle Bin.\n\n**Field behavior**\n\nThis field enables access to recently deleted records:\n\n- OPTIONAL: Defaults to false if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Changes the underlying API method used for queries\n\n**Implementation impact**\n\nWhen set to true:\n- Salesforce's queryAll() API method is used instead of query()\n- Records in the Recycle Bin (deleted within the past 15 days) are included\n- Each record contains an \"IsDeleted\" field to identify deleted status\n- API usage may be higher as queryAll() counts differently against limits\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Synchronizing deletion operations to target systems\n- Building data recovery/rollback mechanisms\n- Maintaining a complete audit trail including deleted records\n- Implementing soft-delete patterns across integrated systems\n\n**Technical considerations**\n\n- Records in the Recycle Bin are only available for up to 15 days\n- Hard-deleted records (emptied from Recycle Bin) are not accessible\n- The IsDeleted 
field should be checked to identify deleted records\n- May increase response size and processing time slightly\n\nIMPORTANT: This feature only works with SOQL exports (type=\"soql\") and is ignored\nfor distributed exports (type=\"distributed\") since those operate on events rather\nthan queries.\n","default":false},"api":{"type":"string","description":"Specifies which Salesforce API to use for retrieving data.\n\n**Field behavior**\n\nThis field controls the underlying API technology:\n\n- OPTIONAL: Defaults to \"rest\" if not specified\n- Only relevant for SOQL exports (type=\"soql\")\n- Ignored for distributed exports and blob exports\n- Determines performance characteristics and compatibility\n\n**Available APIs**\n\n**REST API**\n```\n\"api\": \"rest\"\n```\n\n- **Performance**: Optimized for immediate response and smaller datasets\n- **Concurrency**: Higher - multiple queries can run simultaneously\n- **Data Volume**: Best for <10,000 records\n- **Use Cases**: Lookups, real-time queries, smaller datasets\n- **Special Features**: Required for lookup exports (isLookup=true)\n\n**Bulk API 2.0**\n```\n\"api\": \"bulk\"\n```\n\n- **Performance**: Optimized for large data volumes, higher throughput\n- **Concurrency**: Lower - utilizes a job queuing system\n- **Data Volume**: Best for >=10,000 records\n- **Use Cases**: Large data migrations, full dataset exports, reports\n- **Special Features**: Requires \"bulk\" object configuration for settings\n\n**Dependencies and constraints**\n\n- When isLookup=true, api must be set to \"rest\" (or left as default)\n- When api=\"bulk\", the bulk object can be configured for additional options\n- Bulk API introduces slight processing latency but handles larger volumes\n- REST API provides immediate results but may time out with very large queries\n\n**Selection guidance**\n\nChoose based on your data volume and response time needs:\n\n- For smaller datasets (<10,000 records) or lookups: use \"rest\"\n- For larger datasets or 
background processing: use \"bulk\"\n- When immediacy is critical: use \"rest\"\n- When throughput is critical: use \"bulk\"\n\nIMPORTANT: The Bulk API is not compatible with lookup exports (isLookup=true).\nIf your export is configured as a lookup, you must use the REST API.\n","enum":["rest","bulk"]},"bulk":{"type":"object","description":"Configuration parameters for Salesforce Bulk API 2.0 exports.\n\n**Field behavior**\n\nThis object contains settings specific to Bulk API operations:\n\n- REQUIRED when api=\"bulk\" and type=\"soql\"\n- Ignored when api=\"rest\" or type=\"distributed\"\n- Controls behavior of Salesforce Bulk API jobs\n- Provides optimization options for large data volumes\n\n**Implementation context**\n\nThe Bulk API operates differently from REST API:\n- Creates asynchronous jobs in Salesforce\n- Processes records in batches for higher throughput\n- Optimized for transferring large datasets\n- Has different governor limits and behavior\n","properties":{"maxRecords":{"type":"integer","description":"Specifies the maximum number of records to retrieve in a single Bulk API job.\n\n**Field behavior**\n\nThis field controls query result size:\n\n- OPTIONAL: Uses Salesforce's default if not specified\n- Sets the `maxRecords` parameter on Bulk API requests\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Helps prevent timeouts with complex queries or large record sizes\n\n**Technical considerations**\n\n- Different Salesforce editions have different limits\n- Values too high may cause timeouts with complex records\n- Values too low may require multiple API calls\n- Standard objects typically support higher limits than custom objects\n\n**Optimization guidance**\n\n- For simple records (few fields): Higher values improve throughput\n- For complex records (many fields): Lower values prevent timeouts\n- For standard objects: 50,000 is usually safe\n- For custom objects: 10,000-25,000 is recommended\n\nIMPORTANT: The Salesforce Bulk API 2.0 has a 
hard limit of 100 million records\nper job, but practical limits are typically much lower based on record complexity\nand Salesforce instance capacity.\n","minimum":10000},"purgeJobAfterExport":{"type":"boolean","description":"Controls whether Bulk API jobs are automatically deleted after completion.\n\n**Field behavior**\n\nThis field manages job cleanup:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, deletes the Bulk API job after all data is retrieved\n- Only applicable when api=\"bulk\" and type=\"soql\"\n- Has no effect on the actual data retrieval or results\n\n**Implementation impact**\n\nWhen enabled (true):\n- Reduces clutter in the Salesforce Bulk Data Load Jobs UI\n- Prevents accumulation of completed jobs\n- May help stay under job retention limits\n- Makes job details unavailable for later troubleshooting\n\nWhen disabled (false):\n- Preserves job history for troubleshooting\n- Allows reviewing job details in Salesforce\n- May accumulate many jobs over time\n\n**Best practices**\n\n- For production environments: Set to true for cleanliness\n- For testing/development: Set to false for easier debugging\n- For audit-heavy environments: Set to false if job history is needed\n\nIMPORTANT: This setting only affects job metadata cleanup in Salesforce.\nIt has no impact on the actual data retrieved or the success of the export.\n"}}},"soql":{"type":"object","description":"Configuration for SOQL query-based Salesforce exports.\n\n**Field behavior**\n\nThis object contains the SOQL query settings:\n\n- REQUIRED when type=\"soql\"\n- Not used when type=\"distributed\" or for blob exports\n- Controls what data is retrieved from Salesforce\n- Works with both REST API and Bulk API methods\n\n**Implementation requirements**\n\nThe soql object must include a valid query that follows Salesforce SOQL syntax.\nThe query determines:\n- Which objects are accessed\n- Which fields are retrieved\n- What filtering conditions are applied\n- How results are 
sorted and limited\n","properties":{"query":{"type":"string","description":"The SOQL query that defines what data to retrieve from Salesforce.\n\n**Field behavior**\n\nThis field contains the actual SOQL statement:\n\n- REQUIRED when type=\"soql\"\n- Must follow Salesforce Object Query Language syntax\n- Passed directly to Salesforce API endpoints\n- Can include dynamic values via handlebars\n\n**Query structure elements**\n\nA complete SOQL query typically includes:\n\n**Field Selection**\n```\nSELECT Id, Name, Email, Phone, Account.Name\n```\n- List specific fields to retrieve\n- Include relationship fields using dot notation\n- SOQL has no \"SELECT *\"; list fields explicitly (FIELDS(ALL) is available in newer API versions but should be used sparingly)\n\n**Object Selection**\n```\nFROM Contact\n```\n- Specifies the Salesforce object to query\n- Must be a valid API name (not label)\n- Not case-sensitive, but best practice is to match the Salesforce API name exactly\n\n**Filter Conditions**\n```\nWHERE LastModifiedDate > {{lastExportDateTime}}\nAND IsActive = true\n```\n- Limits which records are returned\n- Can reference handlebars variables (e.g., for delta exports)\n- Supports standard operators (=, !=, >, <, LIKE, IN, etc.)\n\n**Relationship Queries**\n```\nSELECT Account.Id, (SELECT Id, FirstName FROM Contacts)\nFROM Account\n```\n- Retrieves parent and child records in a single query\n- Helps reduce API calls for related data\n- Supports both lookup and master-detail relationships\n\n**Implementation best practices**\n\n- Select only the fields you need (improves performance)\n- Use WHERE clauses to limit data volume\n- For delta exports, use LastModifiedDate with {{lastExportDateTime}}\n- Use ORDER BY for consistent results across multiple pages\n- Avoid SOQL functions in filters when using Bulk API\n\n**Technical limits**\n\n- Maximum query length: 20,000 characters\n- Maximum relationships traversed: 5 levels\n- Maximum subquery levels: 1 (no nested subqueries)\n- Maximum batch size varies by API (REST: 2,000, Bulk: 10,000+)\n\nIMPORTANT: When 
using relationship queries, child objects count against\ngovernor limits differently. For bulk processing of many parent-child records,\nconsider separate queries or the oneToMany export setting.\n","maxLength":200000}}},"distributed":{"type":"object","description":"Configuration for real-time Salesforce event-driven exports.\n\n**Field behavior**\n\nThis object defines real-time event listener settings:\n\n- REQUIRED when type=\"distributed\"\n- Not used when type=\"soql\" or for blob exports\n- Creates push-based integration triggered by Salesforce events\n- Implements real-time processing of creates, updates, and deletes\n\n**Implementation context**\n\nDistributed exports work fundamentally differently from SOQL exports:\n- No scheduling or manual execution required\n- Triggered automatically when records change in Salesforce\n- Data flows in real-time as events occur\n- Uses Salesforce's platform events and streaming API\n\n**Technical architecture**\n\nWhen configured, the system:\n1. Creates custom triggers in the connected Salesforce org\n2. Establishes event listeners for the specified objects\n3. Processes events as they occur (create/update/delete operations)\n4. 
Delivers the changed records to the integration flow\n","properties":{"referencedFields":{"type":"array","description":"Specifies additional fields to retrieve from related objects via relationships.\n\n**Field behavior**\n\nThis field extends the data retrieval beyond the primary object:\n\n- OPTIONAL: If omitted, only direct fields are retrieved\n- Each entry specifies a field on a related object using dot notation\n- Values are included in the exported record data\n- Only works with lookup and master-detail relationships\n\n**Implementation patterns**\n\n**Parent Object Fields**\n```\n[\"Account.Name\", \"Account.Industry\", \"Account.BillingCity\"]\n```\n- Retrieves fields from parent objects\n- Useful for including context from parent records\n- Common for child objects like Contacts, Opportunities\n\n**User/Owner Fields**\n```\n[\"Owner.Email\", \"CreatedBy.Name\", \"LastModifiedBy.Username\"]\n```\n- Retrieves fields from standard user relationship fields\n- Provides attribution information\n- Useful for auditing and notification scenarios\n\n**Custom Relationship Fields**\n```\n[\"Custom_Lookup__r.Field_Name__c\", \"Another_Relation__r.Status__c\"]\n```\n- Works with custom relationship fields\n- Uses __r suffix for the relationship name\n- Can access standard or custom fields on the related object\n\n**Technical considerations**\n\n- Maximum 10 unique referenced relationships per export\n- Each referenced field counts against Salesforce API limits\n- Fields must be accessible to the connected user\n- Performance impact increases with each additional relationship\n\nIMPORTANT: Referenced fields are retrieved via separate API calls,\nwhich can impact performance with large numbers of records or relationships.\nOnly include fields that are actually needed by your integration.\n","items":{"type":"string"}},"disabled":{"type":"boolean","description":"Controls whether this real-time event listener is active.\n\n**Field behavior**\n\nThis field enables/disables 
event processing:\n\n- OPTIONAL: Defaults to false if not specified\n- When true, prevents the export from processing any events\n- Preserves configuration while temporarily stopping execution\n- Can be toggled without removing the entire export\n\n**Use cases**\n\nThis field is particularly useful for:\n\n- Temporarily pausing real-time integration during maintenance\n- Testing event configuration without processing\n- Creating standby event handlers for disaster recovery\n- Controlling traffic during peak business periods\n\n**Implementation notes**\n\nWhen disabled (true):\n- Events are NOT queued - they are completely ignored\n- No data will flow through this export\n- The Salesforce triggers remain in place but are inactive\n- No impact on Salesforce performance or API limits\n\nIMPORTANT: When disabled, events that occur will NOT be processed retroactively\nwhen re-enabled. Consider using a delta export for catching up on missed changes\nafter extended disabled periods.\n"},"qualifier":{"type":"string","description":"A filter expression that determines which Salesforce events are processed.\n\n**Field behavior**\n\nThis field provides server-side filtering:\n\n- OPTIONAL: If omitted, all events for the object are processed\n- Uses Salesforce formula syntax for filtering\n- Evaluated before events are sent to the integration platform\n- Can reference any field on the triggering record\n\n**Implementation patterns**\n\n**Simple Field Comparisons**\n```\n\"Status__c = 'Approved'\"\n```\n- Processes events only when specific field values match\n- Most efficient filtering approach\n- Can use =, !=, >, <, >=, <= operators\n\n**Logical Conditions**\n```\n\"Amount > 1000 AND Status__c = 'New'\"\n```\n- Combines multiple conditions with AND, OR operators\n- Can use parentheses for complex grouping\n- Allows precise control over which events trigger the integration\n\n**Formula Functions**\n```\n\"CONTAINS(Description, 'Priority') OR ISCHANGED(Status__c)\"\n```\n- Uses 
Salesforce formula functions\n- ISCHANGED detects specific field modifications\n- ISNEW, ISDELETED detect record lifecycle events\n\n**Performance impact**\n\nThe qualifier is evaluated in Salesforce before sending events:\n- Reduces network traffic and processing\n- Lowers integration platform load\n- More efficient than filtering in a subsequent flow step\n- No additional API calls required\n\nIMPORTANT: The qualifier is evaluated using the Salesforce formula engine.\nUse valid Salesforce formula syntax and reference only fields that exist\non the primary object being monitored.\n"},"batchSize":{"type":"integer","description":"Controls how many records are processed together in each real-time batch.\n\n**Field behavior**\n\nThis field affects event processing efficiency:\n\n- OPTIONAL: Uses system default if not specified\n- Valid range: 4 to 200 records per batch\n- Affects how events are grouped before processing\n- Balance between latency and throughput\n\n**Performance considerations**\n\n**Smaller Batch Sizes (4-20)**\n```\n\"batchSize\": 10\n```\n- Lower latency - events processed more immediately\n- More overhead for small numbers of records\n- Better for time-sensitive operations\n- More resilient for complex record processing\n\n**Larger Batch Sizes (50-200)**\n```\n\"batchSize\": 100\n```\n- Higher throughput - better efficiency for many records\n- Slight increase in processing delay\n- Better for high-volume operations\n- More efficient use of API calls and resources\n\n**Implementation guidance**\n\nChoose based on your volume and timing requirements:\n\n- For high-volume objects (many changes per minute): Use larger batches\n- For time-sensitive operations: Use smaller batches\n- For complex processing logic: Use smaller batches\n- For efficiency and throughput: Use larger batches\n\nIMPORTANT: The batch size doesn't limit how many records can be processed\nin total, only how they're grouped for processing. 
All events will eventually\nbe processed regardless of batch size.\n","minimum":4,"maximum":200},"skipExportFieldId":{"type":"string","description":"Specifies a boolean field that prevents integration loops in bidirectional sync.\n\n**Field behavior**\n\nThis field provides a loop prevention mechanism:\n\n- OPTIONAL: If omitted, no loop prevention is applied\n- Must reference a valid boolean/checkbox field on the object\n- Field must be updateable via the Salesforce API\n- System automatically manages the field's value\n\n**Implementation mechanism**\n\nThe loop prevention works as follows:\n\n1. When your integration updates a record in Salesforce\n2. The system temporarily sets this field to true\n3. The update triggers Salesforce's normal event system\n4. But events where this field is true are ignored\n5. The system automatically clears the field afterward\n\n**Use cases**\n\nThis field is critical for:\n\n- Bidirectional synchronization scenarios\n- Preventing infinite update loops\n- Implementing changes that flow both ways\n- Distinguishing between user changes and integration changes\n\n**Field requirements**\n\nThe field you specify must be:\n- A checkbox (Boolean) field in Salesforce\n- Created specifically for integration purposes\n- Not used by other business processes\n- Updateable by the integration user\n\nIMPORTANT: For bidirectional sync scenarios, this field is required.\nWithout it, updates from your integration would trigger events that\ncould create infinite loops between systems.\n"},"relatedLists":{"type":"array","description":"Configuration for retrieving child records related to the primary object.\n\n**Field behavior**\n\nThis field enables parent-child data synchronization:\n\n- OPTIONAL: If omitted, only the primary record is processed\n- Each array entry configures one related list/child object\n- Child records are included with their parent in the payload\n- Automatically retrieves child records when parent changes\n\n**Implementation 
context**\n\nThis feature allows you to:\n- Synchronize complete object hierarchies in real-time\n- Include child records when a parent record changes\n- Process parent-child data together in a single flow\n- Maintain relationships between objects across systems\n\n**Technical impact**\n\n- Each related list requires additional Salesforce API calls\n- Performance impact increases with each related list\n- Data volume can increase significantly with many children\n- Parent-child structures may require special handling in flows\n","items":{"type":"object","description":"Configuration for a single related list (child object) to include.\n\nEach object in this array defines how to retrieve one type of\nchild record related to the primary object. Multiple related lists\ncan be configured to retrieve different types of children.\n","properties":{"referencedFields":{"type":"array","description":"Specifies which fields to retrieve from the child records.\n\n**Field behavior**\n\nThis field selects child record fields:\n\n- REQUIRED for each related list configuration\n- Must contain valid API field names for the child object\n- Only listed fields will be retrieved from child records\n- An empty array retrieves only the Id field\n\n**Implementation guidance**\n\n- Include only fields needed by your integration\n- Always include key identifier fields\n- Consider relationship fields if needed\n- Balance between completeness and performance\n\nIMPORTANT: Each field increases data volume and processing time.\nOnly include fields that your integration actually needs to process.\n","items":{"type":"string"}},"parentField":{"type":"string","description":"Specifies the field on the child object that relates back to the parent.\n\n**Field behavior**\n\nThis field identifies the relationship:\n\n- REQUIRED for each related list configuration\n- Must be a lookup or master-detail field on the child object\n- References the parent object being exported\n- Used to construct the 
relationship query\n\n**Relationship field patterns**\n\n**Standard Relationships**\n```\n\"parentField\": \"AccountId\"\n```\n- For standard parent-child relationships\n- Field name typically ends with \"Id\"\n- References standard objects\n\n**Custom Relationships**\n```\n\"parentField\": \"Parent_Object__c\"\n```\n- For custom parent-child relationships\n- Field name typically ends with \"__c\"\n- References custom objects\n\n**Technical details**\n\nThe system uses this field to construct a query like:\n```\nSELECT [referencedFields] FROM [sObjectType]\nWHERE [parentField] = [parent record Id]\n```\n\nIMPORTANT: This must be the exact API name of the field on the child\nobject that creates the relationship to the parent, not the relationship\nname itself.\n"},"sObjectType":{"type":"string","description":"Specifies the API name of the child object to retrieve.\n\n**Field behavior**\n\nThis field identifies the child object type:\n\n- REQUIRED for each related list configuration\n- Must be a valid Salesforce API object name\n- Case-sensitive (match Salesforce naming exactly)\n- Can be standard or custom object\n\n**Object name patterns**\n\n**Standard Objects**\n```\n\"sObjectType\": \"Contact\"\n```\n- Standard Salesforce objects\n- No namespace or suffix\n- First letter capitalized\n\n**Custom Objects**\n```\n\"sObjectType\": \"Custom_Object__c\"\n```\n- Custom Salesforce objects\n- API name with \"__c\" suffix\n- Case-sensitive, including underscores\n\n**Relationship compatibility**\n\nThe sObjectType must:\n- Have a relationship field to the parent object\n- Be accessible to the connected user\n- Support standard SOQL queries\n\nIMPORTANT: Use the exact API name of the object, not its label.\nThis value is case-sensitive and must match Salesforce's naming exactly.\n"},"filter":{"type":"string","description":"Optional SOQL WHERE clause to filter which child records are included.\n\n**Field behavior**\n\nThis field adds filtering to child record 
retrieval:\n\n- OPTIONAL: If omitted, all related child records are included\n- Contains only the condition expression (without \"WHERE\" keyword)\n- Uses standard SOQL syntax for conditions\n- Applied in addition to the parent relationship filter\n\n**Filtering patterns**\n\n**Simple Condition**\n```\n\"filter\": \"IsActive = true\"\n```\n- Basic field comparison\n- Only active related records are included\n\n**Multiple Conditions**\n```\n\"filter\": \"Status__c = 'Open' AND Priority = 'High'\"\n```\n- Combined conditions with logical operators\n- Only records matching all conditions are included\n\n**Complex Filtering**\n```\n\"filter\": \"CreatedDate > LAST_N_DAYS:30 OR IsClosed = false\"\n```\n- Can use Salesforce date literals and functions\n- Can mix different types of conditions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID] AND ([filter])\n```\n\nIMPORTANT: Do not include the \"WHERE\" keyword in this field.\nOnly include the condition expression itself, as it will be combined\nwith the parent relationship condition automatically.\n"},"orderBy":{"type":"string","description":"Optional SOQL ORDER BY clause to sort the child records.\n\n**Field behavior**\n\nThis field controls child record ordering:\n\n- OPTIONAL: If omitted, order is determined by Salesforce\n- Contains only field and direction (without \"ORDER BY\" keywords)\n- Uses standard SOQL syntax for sorting\n- Applied to the child records query\n\n**Ordering patterns**\n\n**Single Field Ascending (Default)**\n```\n\"orderBy\": \"Name\"\n```\n- Sorts by a single field in ascending order\n- ASC is implied if not specified\n\n**Single Field Descending**\n```\n\"orderBy\": \"CreatedDate DESC\"\n```\n- Sorts by a single field in descending order\n- Must explicitly specify DESC\n\n**Multiple Fields**\n```\n\"orderBy\": \"Priority DESC, CreatedDate ASC\"\n```\n- Sorts by 
multiple fields in specified directions\n- Comma-separated list of fields with optional directions\n\n**Technical details**\n\nThe system appends this to the automatically generated relationship query:\n```\nSELECT [fields] FROM [sObjectType]\nWHERE [parentField] = [parent ID]\nORDER BY [orderBy]\n```\n\nIMPORTANT: Do not include the \"ORDER BY\" keywords in this field.\nOnly include the field names and sort directions, as they will be\nadded to the query with the proper syntax automatically.\n"}}}}}}}},"AS2-2":{"type":"object","description":"Configuration for AS2 (Applicability Statement 2) exports and listeners.\n\n**What is AS2?**\n\nApplicability Statement 2 (AS2) is a widely adopted protocol for securely and reliably transmitting\nEDI and other data types over the internet using HTTP/S, S/MIME encryption, and digital signatures.\nAS2 provides:\n\n- **Message integrity** through digital signature validation\n- **Confidentiality** via encryption with X.509 certificates\n- **Non-repudiation** via Message Disposition Notifications (MDNs)\n\n**As2 export configuration**\n\nIMPORTANT: When the _connectionId field points to a connection where the type is as2,\nthis object MUST be populated for the export to function properly. This is a required configuration\nfor all AS2 based exports, as determined by the connection associated with the export.\n\n**As2 listener functionality**\n\nAn AS2 listener is a flow step in Celigo designed to receive incoming AS2 transmissions\nand deliver them into a defined integration flow. It acts as the \"source\" of a flow—similar to\nhow a webhook listener works—except it specifically handles AS2 protocol requirements, including\ndecryption, signature verification, and MDN generation.\n\nUnlike periodic polling or scheduled exports, an AS2 listener functions in near real-time—when\na trading partner pushes an AS2 message, Celigo's listener step processes it instantly,\ngenerating an MDN in response to acknowledge receipt. 
This ensures low-latency, event-driven\nprocessing where each inbound AS2 transmission triggers the integration flow automatically.\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Reference to a TradingPartnerConnector document.\n\n**Trading partner connector overview**\n\nA Trading Partner Connector in Celigo's integrator.io is a prebuilt, partner-specific integration\ntemplate that streamlines the setup and management of Electronic Data Interchange (EDI) transactions\nwith a designated trading partner. It encapsulates all requisite configurations:\n\n- Communication protocol (e.g., AS2, FTP/SFTP, VAN)\n- Document schemas (such as ANSI X12 or EDIFACT)\n- Mappings\n- Validation rules\n- Endpoint details\n\n**Benefits**\n\nBy referencing a Trading Partner Connector through this field, organizations:\n\n- Reduce manual setup time\n- Ensure compliance with specific partner requirements\n- Take advantage of Celigo's out-of-the-box EDI capabilities\n- Process transactions reliably and securely\n- Onboard new partners rapidly without building flows from scratch\n\nThis field is crucial for AS2 configurations as it links the export to all partner-specific\nsettings required for successful AS2 communication.\n"},"blob":{"type":"boolean","description":"- **Behavior**: Retrieves raw files without parsing them into structured data records.  
Should only be used when the contents of the file will not be used in subsequent steps.\n- **UI Appearance**: \"Transfer\" flow step\n- **Required Config**: Configuration only available on AS2 and VAN exports (as2.blob = true)\n- **Use Case**: Raw file transfers for binary files or when parsing is handled downstream\n- **Important Note**: Use this when you want to handle the file as a raw blob without automatic parsing\n"}},"required":[]},"DynamoDB-2":{"type":"object","description":"Configuration object for Amazon DynamoDB data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a DynamoDB connection\nand must not be included for other connection types. It defines how data is extracted\nfrom DynamoDB tables, using query operations against NoSQL data structures.\n\n**Implementation requirements**\n\nThe DynamoDB object has the following requirements:\n\n- For basic exports:\n  Required fields: region, method, tableName, expressionAttributeNames, expressionAttributeValues, keyConditionExpression\n  Optional fields: filterExpression, projectionExpression\n\n- For Once exports (when export.type=\"once\"):\n  Additional required field: onceExportPartitionKey\n  Optional field: onceExportSortKey (for composite keys)\n","properties":{"region":{"type":"string","enum":["us-east-1","us-east-2","us-west-1","us-west-2","af-south-1","ap-east-1","ap-south-1","ap-northeast-1","ap-northeast-2","ap-northeast-3","ap-southeast-1","ap-southeast-2","ca-central-1","eu-central-1","eu-west-1","eu-west-2","eu-west-3","eu-south-1","eu-north-1","me-south-1","sa-east-1"],"description":"Specifies the AWS region where the DynamoDB table is located.\n\n**Field behavior**\n\nThis field determines where to connect to DynamoDB:\n\n- REQUIRED for all DynamoDB exports\n- Must match the region where your DynamoDB table is deployed\n- Select the same AWS region used in your database configuration\n- Ensures the integration can access your 
table\n","default":"us-east-1"},"method":{"type":"string","enum":["query"],"description":"Defines the DynamoDB operation method used to retrieve data.\n\n**Field behavior**\n\n- REQUIRED for all DynamoDB exports\n- Currently only supports \"query\" operations\n- Always set this value to \"query\"\n- Additional methods may be supported in future versions\n"},"tableName":{"type":"string","description":"Specifies the DynamoDB table from which to retrieve data.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all DynamoDB exports\n- Must be an exact match to an existing table name\n- Case-sensitive as per AWS naming conventions\n- Cannot be changed without recreating the export\n\n**Implementation patterns**\n\n**Standard Table Names**\n```\n\"tableName\": \"Customers\"\n```\n"},"keyConditionExpression":{"type":"string","description":"Defines the search criteria to determine which items to retrieve from DynamoDB.\n\n**Field behavior**\n\n- REQUIRED when method=\"query\"\n- Must include a condition on the partition key\n- Can optionally include conditions on the sort key\n- Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Common patterns**\n\n```\n\"#pk = :pkValue\"                                  // Partition key only\n\"#pk = :pkValue AND #sk = :skValue\"               // Exact match on partition and sort key\n\"#pk = :pkValue AND #sk BETWEEN :start AND :end\"  // Range query on sort key\n\"#pk = :pkValue AND begins_with(#sk, :prefix)\"    // Prefix match on sort key\n```\n\nPlaceholders with '#' reference attribute names, while ':' reference values.\n"},"filterExpression":{"type":"string","description":"Filters the results from a query based on non-key attributes.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all items matching the key condition are returned\n- Applied after the key condition but before returning results\n- Can reference any non-key attributes to further refine results\n- 
Uses placeholders defined in expressionAttributeNames and expressionAttributeValues\n\n**Examples**\n\n```\n\"#status = :active\"\n\"#status = :active AND #price > :minPrice\"\n\"contains(#tags, :tagValue)\"\n```\n\nRefer to the DynamoDB documentation for the complete list of valid operators and syntax.\n"},"projectionExpression":{"type":"array","items":{"type":"string"},"description":"Specifies which fields to return from each item in the results.\n\n**Field behavior**\n\n- OPTIONAL: If omitted, all fields are returned\n- Each array element represents a field to include\n- References attribute names defined in expressionAttributeNames\n- Reduces data transfer by returning only needed fields\n\n**Examples**\n\n```\n[\"#id\", \"#name\", \"#email\"]               // Basic fields\n[\"#id\", \"#profile.#firstName\"]           // Nested fields\n[\"#id\", \"#items[0]\", \"#items[1]\"]        // List elements\n```\n\nRefer to the DynamoDB documentation for more details on projection syntax.\n"},"expressionAttributeNames":{"type":"string","description":"Defines placeholders for attribute names used in expressions.\n\n**Field behavior**\n\n- REQUIRED when using expressions that reference attribute names\n- Must be a valid JSON string mapping placeholders to actual attribute names\n- Each placeholder must begin with a pound sign (#) followed by alphanumeric characters\n- Used in keyConditionExpression, filterExpression, and projectionExpression\n\n**Example**\n\n```\n\"{\\\"#pk\\\": \\\"customerId\\\", \\\"#status\\\": \\\"status\\\"}\"\n```\n\nThis maps the placeholder #pk to the actual attribute name \"customerId\" and #status to \"status\".\n\nRefer to the DynamoDB documentation for more details.\n"},"expressionAttributeValues":{"type":"string","description":"Defines placeholder values used in expressions for comparison.\n\n**Field behavior**\n\n- REQUIRED when using expressions that compare attribute values\n- Must be a valid JSON string mapping placeholders to actual 
values\n- Each placeholder must begin with a colon (:) followed by alphanumeric characters\n- Used in keyConditionExpression and filterExpression\n- Can contain static values or dynamic values with handlebars syntax\n\n**Example**\n\n```\n\"{\\\":customerId\\\": \\\"12345\\\", \\\":status\\\": \\\"ACTIVE\\\"}\"\n```\n\nThis maps the placeholder :customerId to the value \"12345\" and :status to \"ACTIVE\".\n\nRefer to the DynamoDB documentation for more details.\n"},"onceExportPartitionKey":{"type":"string","description":"Specifies the partition key attribute for identifying items in once exports.\n\n**Field behavior**\n\n- REQUIRED when export.type=\"once\"\n- Must specify the primary key that uniquely identifies each item in the table\n- Celigo uses this to track which items have been processed\n- After successful export, Celigo updates a tracking field in the database\n\nThis is needed for once exports to prevent duplicate processing of the same items\nin subsequent runs by marking them as processed.\n\nRefer to the DynamoDB documentation for more details on partition keys.\n"},"onceExportSortKey":{"type":"string","description":"Specifies the sort key attribute for identifying items in composite key tables.\n\n**Field behavior**\n\n- OPTIONAL: Only needed for tables with composite primary keys\n- Used together with onceExportPartitionKey for tables where items are identified by both keys\n- Celigo uses both keys to uniquely identify items that have been processed\n- For tables with only a partition key (simple primary key), leave this empty\n\nThis is only required if your DynamoDB table uses a composite primary key\n(partition key + sort key) to uniquely identify items.\n\nRefer to the DynamoDB documentation for more details on sort keys.\n"}}},"FTP-2":{"type":"object","description":"Configuration object for FTP/SFTP connection settings in export integrations.\n\nThis object is REQUIRED when the _connectionId field references an FTP/SFTP connection\nand must not 
be included for other connection types. It defines how to locate and retrieve\nfiles from FTP, FTPS, or SFTP servers.\n\nThe FTP export object has the following requirements:\n\n- Required fields: directoryPath\n- Optional fields: fileNameStartsWith, fileNameEndsWith, backupDirectoryPath, _tpConnectorId\n\n**Purpose**\n\nThis configuration specifies:\n- Which directory to retrieve files from\n- How to filter files by name patterns\n- Where to move files after retrieval (optional)\n- Any trading partner-specific connection settings\n","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"References a Trading Partner Connector for standardized B2B integrations.\n\n**Field behavior**\n\nThis field links to pre-configured trading partner settings:\n\n- OPTIONAL: If omitted, uses only the FTP connection details\n- References a Celigo Trading Partner Connector by _id\n- When specified, inherits partner-specific configurations\n"},"directoryPath":{"type":"string","description":"Directory on the FTP/SFTP server to retrieve files from.\n\n- REQUIRED for all FTP exports\n- Can be relative to login directory or absolute path\n- Supports handlebars templates (e.g., `archive/{{date 'YYYY-MM-DD'}}`)\n- Use forward slashes (/) regardless of server OS\n- Path is case-sensitive on UNIX/Linux servers\n\nIMPORTANT: The FTP user must have read permissions on this directory.\n"},"fileNameStartsWith":{"type":"string","description":"Optional prefix filter for filenames.\n\n- Filters files based on starting characters\n- Case-sensitive on most FTP servers\n- Can use static text or handlebars templates\n- Examples:\n  - `\"ORDER_\"` - matches ORDER_123.csv but not order_123.csv\n  - `\"INV_{{date 'YYYYMMDD'}}\"` - matches current date's invoices\n\nWhen used with fileNameEndsWith, files must match both criteria.\n"},"fileNameEndsWith":{"type":"string","description":"Optional suffix filter for filenames.\n\n- Commonly used to filter by file extension\n- 
Case-sensitive on most FTP servers\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with fileNameStartsWith, files must match both criteria.\n"},"backupDirectoryPath":{"type":"string","description":"Optional directory where files are moved before deletion.\n\n- If omitted, files are deleted from the original location after successful export\n- Must be on the same FTP/SFTP server\n- Supports static paths or handlebars templates\n- Examples:\n  - `\"processed\"` - simple archive folder\n  - `\"archive/{{date 'YYYY/MM/DD'}}\"` - date-based hierarchy\n\nIMPORTANT: Celigo automatically deletes files from the source directory after\nsuccessful export. The backup directory is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"}}},"JDBC-2":{"type":"object","description":"Configuration object for JDBC (Java Database Connectivity) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a JDBC database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Jdbc export capabilities**\n\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\n\nFor delta/incremental exports, do NOT populate a `delta` object inside `jdbc`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\n\nFor once exports (mark records as processed), populate `jdbc.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\n\nJust provide the query:\n```json\n{\n  \"jdbc\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. 
The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n\n**For delta exports (when top-level type is \"delta\")**\nInclude `{{lastExportDateTime}}` or `{{currentExportDateTime}}` in the WHERE clause:\n- `SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}`\n- `SELECT * FROM orders WHERE modified_date >= {{lastExportDateTime}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"MongoDB-2":{"type":"object","description":"Configuration object for MongoDB data integration exports.\n\nThis 
object is REQUIRED when the _connectionId field references a MongoDB connection\nand must not be included for other connection types. It defines how documents are\nretrieved from MongoDB collections for processing in integrations.\n\nMongoDB exports currently support the following operational modes:\n- Retrieves documents from specified collections\n- Filters documents based on query criteria\n- Selects specific fields with projections\n- Provides NoSQL flexibility with JSON query syntax\n","properties":{"method":{"type":"string","enum":["find"],"description":"Specifies the MongoDB operation to perform when retrieving data.\n\n**Field behavior**\n\nThis field defines the query approach:\n\n- REQUIRED for all MongoDB exports\n- Currently only supports \"find\" operations\n- Determines how other parameters are interpreted\n- Corresponds to MongoDB's db.collection.find() method\n- Future versions may support additional methods\n\n**Query method types**\n\n**Find Method**\n```\n\"method\": \"find\"\n```\n\n- **Behavior**: Retrieves documents from a collection based on filter criteria\n- **MongoDB Equivalent**: db.collection.find(filter, projection)\n- **Required Parameters**: collection\n- **Optional Parameters**: filter, projection\n- **Use Cases**: Standard document retrieval, filtered queries, field selection\n\n**Technical considerations**\n\nThe method selection influences:\n- What other fields must be provided\n- How the query will be executed against MongoDB\n- What indexing strategies should be applied\n- Performance characteristics of the operation\n\nIMPORTANT: While only \"find\" is currently supported, the schema is designed\nfor future expansion to include other MongoDB operations like \"aggregate\"\nfor more complex data transformations and aggregations.\n"},"collection":{"type":"string","description":"Specifies the MongoDB collection to query for documents.\n\n**Field behavior**\n\nThis field identifies the data source:\n\n- REQUIRED for all MongoDB 
exports\n- Must reference a valid collection in the MongoDB database\n- Case-sensitive according to MongoDB collection naming\n- The primary container for documents to be retrieved\n"},"filter":{"type":"string","description":"Defines query criteria for selecting documents from the collection.\n\n**Field behavior**\n\nThis field narrows document selection:\n\n- OPTIONAL: If omitted, all documents in the collection are returned\n- Contains a MongoDB query document as a JSON string\n- Supports all standard MongoDB query operators\n- Provides precise control over which documents are retrieved\n\n**Query patterns**\n\n**Simple Equality Query**\n```\n\"filter\": \"{\\\"status\\\": \\\"active\\\"}\"\n```\n\n- **Behavior**: Returns only documents where status equals \"active\"\n- **MongoDB Equivalent**: db.collection.find({\"status\": \"active\"})\n- **Matching Documents**: {\"_id\": 1, \"status\": \"active\", \"name\": \"Example\"}\n- **Use Cases**: Status filtering, category selection, type filtering\n\n**Comparison Operator Query**\n```\n\"filter\": \"{\\\"createdDate\\\": {\\\"$gt\\\": \\\"2023-01-01T00:00:00Z\\\"}}\"\n```\n\n- **Behavior**: Returns documents created after January 1, 2023\n- **MongoDB Equivalent**: db.collection.find({\"createdDate\": {\"$gt\": \"2023-01-01T00:00:00Z\"}})\n- **Operators**: $eq, $gt, $gte, $lt, $lte, $ne, $in, $nin\n- **Use Cases**: Date ranges, numeric thresholds, incremental processing\n\n**Logical Operator Query**\n```\n\"filter\": \"{\\\"$or\\\": [{\\\"status\\\": \\\"pending\\\"}, {\\\"status\\\": \\\"processing\\\"}]}\"\n```\n\n- **Behavior**: Returns documents with either pending or processing status\n- **MongoDB Equivalent**: db.collection.find({\"$or\": [{\"status\": \"pending\"}, {\"status\": \"processing\"}]})\n- **Operators**: $and, $or, $nor, $not\n- **Use Cases**: Multiple conditions, alternative criteria, complex filtering\n\n**Nested Document Query**\n```\n\"filter\": \"{\\\"address.country\\\": \\\"USA\\\"}\"\n```\n\n- **Behavior**: Returns documents where the nested 
country field equals \"USA\"\n- **MongoDB Equivalent**: db.collection.find({\"address.country\": \"USA\"})\n- **Dot Notation**: Accesses nested document fields\n- **Use Cases**: Nested data filtering, object property matching\n\n**Handlebars Template Query**\n```\n\"filter\": \"{\\\"customerId\\\": \\\"{{record.customer_id}}\\\", \\\"status\\\": \\\"{{record.status}}\\\"}\"\n```\n\n- **Behavior**: Dynamically filters based on record field values\n- **MongoDB Equivalent**: db.collection.find({\"customerId\": \"123\", \"status\": \"active\"})\n- **Template Variables**: Values replaced at runtime with actual record data\n- **Use Cases**: Dynamic filtering, context-aware queries, relational lookups\n\n**Incremental Processing Query**\n```\n\"filter\": \"{\\\"lastModified\\\": {\\\"$gt\\\": \\\"{{lastRun}}\\\"}}\"\n```\n\n- **Behavior**: Returns only documents modified since last execution\n- **MongoDB Equivalent**: db.collection.find({\"lastModified\": {\"$gt\": \"2023-06-15T10:30:00Z\"}})\n- **System Variables**: {{lastRun}} replaced with timestamp of previous execution\n- **Use Cases**: Change data capture, delta synchronization, incremental updates\n"},"projection":{"type":"string","description":"Controls which fields are included or excluded from returned documents.\n\n**Field behavior**\n\nThis field optimizes data retrieval:\n\n- OPTIONAL: If omitted, all fields are returned\n- Contains a MongoDB projection document as a JSON string\n- Can include fields (1) or exclude fields (0), but not both (except _id)\n- Helps minimize data transfer by selecting only needed fields\n\n**Projection patterns**\n\n**Field Inclusion Projection**\n```\n\"projection\": \"{\\\"name\\\": 1, \\\"email\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only name and email fields, excludes _id\n- **MongoDB Equivalent**: db.collection.find({}, {\"name\": 1, \"email\": 1, \"_id\": 0})\n- **Result Format**: {\"name\": \"Example\", \"email\": \"user@example.com\"}\n- **Use Cases**: Specific field selection, minimizing 
payload size\n\n**Field Exclusion Projection**\n```\n\"projection\": \"{\\\"password\\\": 0, \\\"internal_notes\\\": 0}\"\n```\n\n- **Behavior**: Returns all fields except password and internal_notes\n- **MongoDB Equivalent**: db.collection.find({}, {\"password\": 0, \"internal_notes\": 0})\n- **Result Impact**: Removes sensitive or unnecessary fields\n- **Use Cases**: Security filtering, removing large fields, data protection\n\n**Nested Field Projection**\n```\n\"projection\": \"{\\\"profile.firstName\\\": 1, \\\"profile.lastName\\\": 1, \\\"orders\\\": 1, \\\"_id\\\": 0}\"\n```\n\n- **Behavior**: Returns only specific nested fields and the orders array\n- **MongoDB Equivalent**: db.collection.find({}, {\"profile.firstName\": 1, \"profile.lastName\": 1, \"orders\": 1, \"_id\": 0})\n- **Dot Notation**: Accesses specific nested document fields\n- **Use Cases**: Partial nested document selection, specific array inclusion\n\n**Technical considerations**\n\n- Maximum size: 128KB\n- Must be a valid JSON string representing a MongoDB projection\n- Cannot mix inclusion and exclusion modes (except _id field)\n- _id field is included by default unless explicitly excluded\n- Projection does not affect which documents are returned, only their fields\n\nIMPORTANT: When working with nested documents or arrays, be aware that including\na specific field path does not automatically include parent documents or arrays.\nFor example, including \"addresses.zipcode\" will only return that specific field,\nnot the entire addresses array or documents within it.\n"}}},"NetSuite-3":{"type":"object","description":"Configuration object for NetSuite data integration exports.\n\nThis object is REQUIRED when the _connectionId field references a NetSuite connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom NetSuite, including saved searches, RESTlets, and distributed/SuiteApp exports.\n\n**NetSuite export modes**\n\nNetSuite exports support several operating modes:\n\n1. **Saved Search Exports** - Uses NetSuite saved searches to retrieve data\n2. **RESTlet Exports** - Uses custom RESTlet scripts for data retrieval\n3. **Distributed Exports** - Uses SuiteApp for real-time or batch processing\n4. **Blob Exports** - Retrieves files from the NetSuite file cabinet and transfers them WITHOUT parsing them into records (raw binary transfer)\n5. **File Exports** - Retrieves files from the NetSuite file cabinet and PARSES them into records (CSV, XML, JSON, etc.)\n\n**Critical:** Blob vs File Export Configuration\n\nThe export `type` field at the top level determines whether file content is parsed:\n\n- **For Blob Exports (no parsing)**: Set the export's `type: \"blob\"` AND configure `netsuite.internalId` (plus optional `netsuite.blob` settings)\n- **For File Exports (with parsing)**: Leave the export's `type` as null/undefined AND configure `netsuite.file`\n\nDo NOT set `type: \"blob\"` when you want file content parsed into records. The \"blob\" type is specifically for raw file transfers without any parsing.\n\n**Implementation requirements**\n\n- For saved search exports: Configure the `searches` or `type` properties\n- For RESTlet exports: Configure the `restlet` property with script details\n- For distributed exports: Configure the `distributed` property\n- For blob exports (no parsing): Set export `type: \"blob\"`, configure `netsuite.internalId`, and optionally `netsuite.blob`\n- For file exports (with parsing): Leave export `type` null and configure `netsuite.file`\n","properties":{"type":{"type":"string","enum":["search","basicSearch","metadata","selectoption","restlet","getList","getServerTime","distributed","file"],"description":"Specifies the NetSuite export operation type. 
This determines how data is retrieved from NetSuite.\n\n**Critical:** File exports vs Blob exports\n\n- **File exports (with parsing)**: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- **Blob exports (raw transfer, no parsing)**: Leave netsuite.type BLANK/null, set the export's top-level type to \"blob\", and configure netsuite.internalId\n\nDo NOT set netsuite.type to \"file\" for blob exports. For blob exports, this property should be omitted or null.\n\n**Recommended types**\n\n- **For Lookups (isLookup: true)**:\n    - **PREFER \"restlet\"**: This allows you to use `suiteapp2.0` saved searches with dynamic inputs easily.\n    - **AVOID \"search\"**: Standard search type is often limited for dynamic lookups.\n\n**Valid values**\n- \"search\" - Use a saved search to retrieve records\n- \"basicSearch\" - Use a basic search query\n- \"metadata\" - Retrieve record metadata\n- \"selectoption\" - Retrieve select options for a field\n- \"restlet\" - Use a RESTlet for custom data retrieval\n- \"getList\" - Retrieve a list of records by internal IDs\n- \"getServerTime\" - Get the NetSuite server time\n- \"distributed\" - Use distributed/SuiteApp for real-time exports\n- \"file\" - Export files from NetSuite file cabinet WITH parsing into records\n- null/omitted - For blob exports or other export types\n\n**Implementation guidance**\n- For file exports WITH parsing: Set netsuite.type to \"file\" and configure netsuite.file.folderInternalId\n- For blob exports (no parsing): Leave netsuite.type blank, set export type to \"blob\", configure netsuite.internalId\n- For saved search exports: Set type to \"search\" and configure netsuite.searches\n- For RESTlet exports: Set type to \"restlet\" and configure netsuite.restlet\n- For distributed/real-time exports: Set type to \"distributed\" and configure netsuite.distributed\n\n**Examples**\n- \"file\" - For file cabinet exports with parsing\n- \"search\" - For saved search exports\n- null - For blob 
exports (raw file transfer without parsing)"},"searches":{"type":"array","description":"An array of search configurations used to query and retrieve data from NetSuite.\nEach search object defines a saved search or ad-hoc query configuration.\n\n**Structure**\nEach item in the array is an object with the following properties:\n- savedSearchId: The internal ID of a saved search in NetSuite (string)\n- recordType: The NetSuite record type being searched (string, e.g., \"customer\", \"salesorder\")\n- criteria: Array of search criteria/filters (optional)\n\n**Examples**\n```json\n[\n  {\n    \"savedSearchId\": \"10\",\n    \"recordType\": \"customer\",\n    \"criteria\": []\n  }\n]\n```\n\n**Implementation guidance**\n- Use savedSearchId to reference an existing saved search in NetSuite\n- recordType should match a valid NetSuite record type\n- criteria can be used to add additional filters to the search","items":{"type":"object","properties":{"savedSearchId":{"type":"string","description":"The internal ID of a saved search in NetSuite"},"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type being searched.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces."},"criteria":{"type":"array","description":"Array of search criteria/filters to apply","items":{"type":"object"}}}}},"metadata":{"type":"object","description":"A collection of key-value pairs that provide additional contextual information about the NetSuite entity. This metadata can include custom attributes, tags, or any supplementary data that helps to describe, categorize, or operationally enhance the entity beyond its standard properties. 
It serves as an extensible mechanism to store user-defined or system-generated information that is not part of the core entity schema, enabling greater flexibility and customization in managing NetSuite data.\n\n**Field behavior**\n- Stores arbitrary additional information related to the NetSuite entity, enhancing its descriptive or operational context.\n- Can include custom fields defined by users, system-generated tags, flags, timestamps, or nested structured data.\n- Typically represented as a dictionary or map with string keys and values that may be strings, numbers, booleans, arrays, or nested objects.\n- Metadata entries are optional and do not affect the core entity behavior unless explicitly integrated with business logic.\n- Supports dynamic addition, update, and removal of metadata entries without impacting the primary entity schema.\n\n**Implementation guidance**\n- Ensure all metadata keys are unique within the collection to prevent accidental overwrites.\n- Support flexible and heterogeneous value types, including primitive types and nested structures, to accommodate diverse metadata needs.\n- Validate keys and values against naming conventions, length restrictions, and allowed character sets to maintain consistency and prevent errors.\n- Implement efficient mechanisms for CRUD (Create, Read, Update, Delete) operations on metadata entries to facilitate easy management.\n- Consider indexing frequently queried metadata keys for performance optimization.\n- Provide clear documentation or schema definitions for any standardized or commonly used metadata keys.\n\n**Examples**\n- {\"department\": \"Sales\", \"region\": \"EMEA\", \"priority\": \"high\"}\n- {\"customField1\": \"value1\", \"customFlag\": true}\n- {\"tags\": [\"urgent\", \"review\"], \"lastUpdatedBy\": \"user123\"}\n- {\"approvalStatus\": \"pending\", \"reviewCount\": 3, \"metadataVersion\": 2}\n- {\"nestedInfo\": {\"createdBy\": \"admin\", \"createdAt\": 
\"2024-05-01T12:00:00Z\"}}\n\n**Important notes**\n- Metadata is optional and should not interfere with the core functionality or validation of the NetSuite entity.\n- Modifications to metadata typically do not trigger business workflows or logic unless explicitly configured to do so."},"selectoption":{"type":"object","description":"Represents a selectable option within a NetSuite field, typically used in dropdown menus, radio buttons, or other selection controls. Each selectoption consists of a user-friendly label and an associated value that uniquely identifies the option internally. This structure enables consistent data entry, filtering, and categorization within NetSuite forms and records.\n\n**Field behavior**\n- Defines a single, discrete choice available to users in selection interfaces such as dropdowns, radio buttons, or multi-select lists.\n- Can be part of a collection of options presented to the user for making a selection.\n- Includes both a display label (visible to users) and a corresponding value (used internally or in API interactions).\n- Supports filtering, categorization, and conditional logic based on the selected option.\n- May be dynamically generated or statically defined depending on the field configuration.\n\n**Implementation guidance**\n- Assign a unique and stable value to each selectoption to prevent ambiguity and maintain data integrity.\n- Use clear, concise, and user-friendly labels that accurately describe the option’s meaning.\n- Validate option values against expected data types and formats to ensure compatibility with backend processing.\n- Implement localization strategies for labels to support multiple languages without altering the underlying values.\n- Consistently apply selectoption structures across all fields requiring predefined choices to standardize user experience.\n- Consider accessibility best practices when designing labels and selection controls.\n\n**Examples**\n- { 
label: \"Active\", value: \"1\" }\n- { label: \"Inactive\", value: \"2\" }\n- { label: \"Pending Approval\", value: \"3\" }\n- { label: \"High Priority\", value: \"high\" }\n- { label: \"Low Priority\", value: \"low\" }\n\n**Important notes**\n- The label is intended for display purposes and may be localized; the value is the definitive identifier used in data processing and API calls.\n- Values should remain consistent over time to avoid breaking integrations or corrupting data.\n- When supporting multiple languages, labels should be translated appropriately while keeping values unchanged.\n- Changes to selectoption values or labels should be managed carefully to prevent unintended side effects.\n- Selectoption entries may be influenced by the context of the parent record, user roles, or permissions.\n\n**Dependency chain**\n- Utilized within field definitions that support selection inputs (e.g., dropdown menus, radio buttons, multi-select lists)."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, providing comprehensive details about each custom field's configuration, behavior, and constraints to facilitate accurate data handling and UI generation.\n\n**Field behavior**\n- Contains detailed metadata about custom fields, including their definitions, types, configurations, and constraints.\n- Provides contextual information necessary for understanding, validating, and manipulating custom fields programmatically.\n- May include attributes such as field ID, label, data type, default values, validation rules, display settings, sourcing information, and field dependencies.\n- Used to dynamically interpret or generate UI elements, data validation logic, or data structures based on custom field configurations.\n- Reflects the current state of custom fields as defined in the NetSuite account, enabling synchronization between the API consumer and the NetSuite environment.\n\n**Implementation 
guidance**\n- Ensure that the metadata accurately reflects the current state of custom fields in the NetSuite account by synchronizing regularly or on configuration changes.\n- Update the metadata whenever custom fields are added, modified, or removed to maintain consistency and prevent data integrity issues.\n- Use this metadata to validate input data against custom field constraints (e.g., data type, required status, allowed values) before processing or submission.\n- Consider caching metadata for performance optimization but implement mechanisms to refresh it periodically or on-demand to capture updates.\n- Handle cases where customFieldMetadata might be null, incomplete, or partially loaded gracefully, including fallback logic or error handling.\n- Respect user permissions and access controls when retrieving or exposing custom field metadata to ensure compliance with security policies.\n\n**Examples**\n- A custom field metadata object describing a custom checkbox field with ID \"custfield_123\", label \"Approved\", default value false, and display type \"inline\".\n- Metadata for a custom list/record field specifying the list of valid options, their internal IDs, and whether multiple selections are allowed.\n- Information about a custom date field including its date format, minimum and maximum allowed dates, and any validation rules applied.\n- Metadata describing a custom currency field with precision settings and default currency.\n\n**Important notes**\n- The structure and content of customFieldMetadata may vary depending on the NetSuite configuration, customizations, and API version.\n- Access to custom field metadata may require appropriate permissions within the NetSuite environment; unauthorized access may result in incomplete or no metadata.\n- Changes to custom fields in NetSuite (such as renaming, deleting, or changing data types) can impact the metadata."},"skipGrouping":{"type":"boolean","description":"Indicates whether to bypass the 
grouping of related records or transactions during processing, allowing each item to be handled individually rather than aggregated into groups.\n\n**Field behavior**\n- When set to true, the system processes each record or transaction independently, without combining them into groups based on shared attributes.\n- When set to false or omitted, related records or transactions are aggregated according to predefined grouping criteria (e.g., by customer, date, or transaction type) before processing.\n- Influences how data is structured, summarized, and reported in outputs or passed to downstream systems.\n- Affects the level of detail and granularity available in the processed data.\n\n**Implementation guidance**\n- Utilize this flag to control processing granularity, especially when detailed, record-level analysis or reporting is required.\n- Confirm that downstream systems, reports, or integrations can accommodate ungrouped data if skipGrouping is enabled.\n- Assess the potential impact on system performance and data volume, as disabling grouping may significantly increase the number of processed items.\n- Consider the use case carefully: grouping is generally preferred for summary reports, while skipping grouping suits detailed audits or troubleshooting.\n\n**Examples**\n- skipGrouping: true — Processes each transaction separately, providing detailed, unaggregated data.\n- skipGrouping: false — Groups transactions by customer or date, producing summarized results.\n- skipGrouping omitted — Defaults to grouping enabled, aggregating related records.\n\n**Important notes**\n- Enabling skipGrouping can lead to increased processing time, higher memory usage, and larger output datasets.\n- Some reports, dashboards, or integrations may require grouped data; verify compatibility before enabling this option.\n- The default behavior is typically grouping enabled (skipGrouping = false) unless explicitly overridden.\n- Changes to this setting may affect data consistency and 
comparability with previously generated reports.\n\n**Dependency chain**\n- Often depends on other properties that define grouping keys or criteria (e.g., groupBy fields).\n- May interact with filtering, sorting, or pagination settings within the processing pipeline.\n- Could influence or be influenced by aggregation functions or summary calculations applied downstream.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false (grouping enabled).\n- Implemented as a conditional flag checked during the data aggregation phase.\n- When true, bypasses aggregation logic and processes each record individually.\n- Typically integrated into the processing workflow to toggle between grouped and ungrouped data handling."},"statsOnly":{"type":"boolean","description":"statsOnly indicates whether the API response should include only aggregated statistical summary data without any detailed individual records. This property is used to optimize response size and improve performance when detailed data is unnecessary.\n\n**Field behavior**\n- When set to true, the API returns only summary statistics such as counts, averages, sums, or other aggregate metrics.\n- When set to false or omitted, the response includes both detailed data records and the associated statistical summaries.\n- Helps reduce network bandwidth and processing time by excluding verbose record-level data.\n- Primarily intended for use cases like dashboards, reports, or monitoring tools where only high-level metrics are required.\n\n**Implementation guidance**\n- Default the value to false to ensure full data retrieval unless explicitly requesting summary-only data.\n- Validate that the input is a boolean to prevent unexpected API behavior.\n- Use this flag selectively in scenarios where detailed records are not needed to avoid loss of critical information.\n- Ensure the API endpoint supports this flag before usage, as some endpoints may not implement statsOnly functionality.\n- Adjust client-side logic 
to handle different response structures depending on the flag’s value.\n\n**Examples**\n- statsOnly: true — returns only aggregated statistics such as total counts, averages, or sums without any detailed entries.\n- statsOnly: false — returns full detailed data records along with statistical summaries.\n- statsOnly omitted — defaults to false, returning detailed data and statistics.\n\n**Important notes**\n- Enabling statsOnly disables access to individual record details, which may limit in-depth data analysis.\n- The response schema changes significantly when statsOnly is true; clients must handle these differences gracefully.\n- Some API endpoints may not support this property; verify compatibility in the API documentation.\n- Pagination parameters may be ignored or behave differently when statsOnly is enabled, since detailed records are excluded.\n\n**Dependency chain**\n- May interact with filtering, sorting, or date range parameters that influence the statistical data returned.\n- Can affect pagination logic because detailed records are omitted when statsOnly is true.\n- Dependent on the API endpoint’s support for summary-only responses.\n\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or part of the request payload depending on API design.\n- Alters the response payload structure by excluding detailed record arrays and including only aggregated metrics.\n- Helps optimize API performance and reduce response payload size in scenarios where detailed data is unnecessary."},"internalId":{"type":"string","description":"The internal ID of a specific file in the NetSuite file cabinet to export.\n\n**Critical:** Required for blob exports\n\nThis property is REQUIRED when the export type is \"blob\". 
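A minimal illustrative configuration sketch (the file ID shown is a placeholder, not a real internal ID):\n```json\n{\n  \"type\": \"blob\",\n  \"netsuite\": {\n    \"internalId\": \"12345\"\n  }\n}\n```\n\n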
For blob exports, you must specify the internalId of the file to export from the NetSuite file cabinet.\n\n**Field behavior**\n- Identifies a specific file in the NetSuite file cabinet by its internal ID\n- Required for blob exports (raw binary file transfers without parsing)\n- The file at this internal ID will be exported as-is without parsing\n\n**Implementation guidance**\n- For blob exports: Set the export's top-level type to \"blob\" (do NOT set netsuite.type) and provide netsuite.internalId\n- Obtain the file's internalId from NetSuite's file cabinet or via API\n- Validate the internalId corresponds to an existing file before export\n\n**Examples**\n- \"12345\" - Internal ID of a specific file\n- \"67890\" - Another file internal ID\n\n**Important notes**\n- This is different from netsuite.file.folderInternalId which specifies a folder for file exports with parsing\n- For blob exports: Use netsuite.internalId (file ID)\n- For file exports with parsing: Use netsuite.file.folderInternalId (folder ID)"},"blob":{"type":"object","properties":{"purgeFileAfterExport":{"type":"boolean","description":"Whether to delete the file from the system after it has been successfully exported. 
This property controls the automatic removal of the source file post-export to help manage storage and maintain system cleanliness.\n\n**Field behavior**\n- Determines if the exported file should be removed from the source storage immediately after a successful export operation.\n- When set to `true`, the system deletes the file as soon as the export completes without errors.\n- When set to `false` or omitted, the file remains intact in its original location after export.\n- Facilitates automated cleanup of files to prevent unnecessary storage consumption.\n- Does not affect the export process itself; deletion occurs only after confirming export success.\n\n**Implementation guidance**\n- Confirm that the export operation has fully completed and succeeded before initiating file deletion to avoid data loss.\n- Verify that the executing user or system has sufficient permissions to delete files from the source location.\n- Assess downstream workflows or processes that might require access to the file after export before enabling purging.\n- Implement logging or notification mechanisms to record when files are purged for audit trails and troubleshooting.\n- Consider integrating with retention policies or backup systems to prevent accidental loss of important data.\n\n**Examples**\n- `true` — The file will be deleted immediately after a successful export.\n- `false` — The file will remain in the source location after export.\n- Omitted or `null` — Defaults to `false`, meaning the file is retained post-export.\n\n**Important notes**\n- File deletion is permanent and cannot be undone; ensure that the file is no longer needed before enabling this option.\n- Use caution in multi-user or multi-process environments where files may be shared or required beyond the export operation.\n- Immediate purging may interfere with backup, archival, or compliance requirements if files are deleted too soon.\n- Consider implementing safeguards or confirmation steps if enabling automatic 
purging in production environments.\n\n**Dependency chain**\n- Relies on the successful completion of the export operation to trigger file deletion.\n- Dependent on file system permissions and access controls to allow deletion.\n- May be affected by other system settings related to file retention, archival, or cleanup policies.\n- Could interact with error handling mechanisms to prevent deletion if export fails or is incomplete.\n\n**Technical details**\n- Typically represented as a boolean value (`true` or `false`).\n- Default behavior is to retain files unless explicitly set to `true`.\n- Deletion should be performed using secure"}},"description":"blob: Configuration for retrieving raw binary files from NetSuite file cabinet WITHOUT parsing them into records. Use this for binary file transfers (images, PDFs, executables) where the file content should be transferred as-is.\n\n**Critical:** Blob export configuration\n\nFor blob exports, configure:\n1. Set the export's top-level `type` to \"blob\"\n2. Set `netsuite.internalId` to the file's internal ID\n3. Leave `netsuite.type` blank/null (do NOT set it to \"file\")\n4. 
Optionally configure `netsuite.blob.purgeFileAfterExport`\n\n**When to use blob vs file**\n- Blob exports: Raw binary transfer WITHOUT parsing - leave netsuite.type blank\n- File exports: Parse file contents into records - set netsuite.type to \"file\"\n\nDo NOT use blob configuration when you want file content parsed into data records.\n\n**Field behavior**\n- Stores raw binary data including files, images, audio, video, or any non-textual content.\n- Supports download operations for binary content from the NetSuite file cabinet.\n- File content is transferred as-is without any parsing or transformation.\n- May be immutable or mutable depending on the specific NetSuite entity and operation.\n- Requires careful handling to maintain data integrity during transmission and storage.\n\n**Implementation guidance**\n- Always encode binary data (e.g., using base64) when transmitting over text-based protocols such as JSON or XML to ensure data integrity.\n- Validate the size of the blob against NetSuite API limits and storage constraints to prevent errors or truncation.\n- Implement secure handling practices, including encryption in transit and at rest, to protect sensitive binary data.\n- Use appropriate MIME/content-type headers when uploading or downloading blobs to correctly identify the data format.\n- Consider chunked uploads/downloads or streaming for large blobs to optimize performance and resource usage.\n- Ensure consistent encoding and decoding mechanisms between client and server to avoid data corruption.\n\n**Examples**\n- A base64-encoded PDF document attached to a NetSuite customer record.\n- An image file (PNG or JPEG) stored as a blob for product catalog entries.\n- A binary export of transaction data in a proprietary format used for integration with external systems.\n- Audio or video files associated with marketing campaigns or training materials.\n- Encrypted binary blobs containing sensitive configuration or credential data.\n\n**Important notes**\n- 
Blob size may be limited by NetSuite API constraints or underlying storage capabilities; exceeding these limits can cause failures.\n- Encoding and decoding must be consistent and correctly implemented to prevent data corruption or loss.\n- Large blobs should be handled using chunked or streamed transfers to avoid memory issues and improve reliability.\n- Security is paramount; blobs may contain sensitive information requiring encryption and strict access controls.\n- Access to blob data typically requires proper authentication and authorization aligned with NetSuite’s security model.\n\n**Dependency chain**\n- Dependent on authentication and authorization mechanisms"},"restlet":{"type":"object","properties":{"recordType":{"type":"string","description":"recordType specifies the type of NetSuite record that the RESTlet will interact with. This property determines the schema, validation rules, and operations applicable to the record within the NetSuite environment, directly influencing how data is processed and managed by the RESTlet.\n\n**Field behavior**\n- Defines the specific NetSuite record type (e.g., customer, salesOrder, invoice) targeted by the RESTlet.\n- Influences the structure and format of data payloads sent to and received from the RESTlet.\n- Controls validation rules, mandatory fields, and available operations based on the selected record type.\n- Affects permissions and access controls enforced during RESTlet execution, ensuring compliance with NetSuite security settings.\n- Determines the applicable business logic and workflows triggered by the RESTlet for the specified record type.\n\n**Implementation guidance**\n- Use the exact internal ID or script ID of the NetSuite record type as recognized by the NetSuite system to ensure accurate targeting.\n- Validate the recordType value against the list of supported NetSuite record types to prevent runtime errors and ensure compatibility.\n- Confirm that the RESTlet script has the necessary permissions 
and roles assigned to access and manipulate the specified record type.\n- When handling multiple record types dynamically, implement conditional logic to accommodate differences in data structure and processing requirements.\n- For custom record types, always use the script ID format (e.g., \"customrecord_myCustomRecord\") to avoid ambiguity.\n- Test RESTlet behavior thoroughly after changing the recordType to verify correct handling of data and operations.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"customrecord_myCustomRecord\"\n- \"vendor\"\n- \"purchaseOrder\"\n\n**Important notes**\n- The recordType must correspond to a valid and supported NetSuite record type; invalid values will cause API calls to fail.\n- Custom record types require referencing by their script IDs, which typically start with \"customrecord_\".\n- Modifying the recordType may necessitate updates to the RESTlet’s codebase to handle different data schemas and business logic.\n- Permissions and role restrictions in NetSuite can limit access to certain record types, impacting RESTlet functionality.\n- Consistency in recordType usage is critical for maintaining data integrity and predictable RESTlet behavior.\n\n**Dependency chain**\n- Depends on the NetSuite environment’s available record types and their configurations.\n- Influences the RESTlet’s data validation, processing logic, and response formatting."},"searchId":{"type":"string","description":"searchId: The unique identifier for a saved search in NetSuite, used to specify which saved search the RESTlet should execute. This ID corresponds to the internal ID assigned to saved searches within the NetSuite system. 
It enables the RESTlet to run predefined queries and retrieve data based on the saved search’s criteria and configuration.\n\n**Field behavior**\n- Specifies the exact saved search to be executed by the RESTlet.\n- Must correspond to a valid and existing saved search internal ID within the NetSuite account.\n- Determines the dataset and filters applied when retrieving search results.\n- Typically required when invoking the RESTlet to perform search operations.\n- Influences the structure and content of the response based on the saved search definition.\n\n**Implementation guidance**\n- Verify that the searchId matches an existing saved search internal ID in the target NetSuite environment.\n- Validate the searchId format and existence before making the RESTlet call to prevent runtime errors.\n- Use the internal ID as a string or numeric value consistent with NetSuite’s conventions.\n- Implement error handling for scenarios where the searchId is invalid, missing, or inaccessible due to permission restrictions.\n- Ensure the integration role or user has appropriate permissions to access and execute the saved search.\n- Consider caching or documenting frequently used searchIds to improve maintainability.\n\n**Examples**\n- \"1234\" — a numeric internal ID representing a specific saved search.\n- \"5678\" — another valid saved search internal ID.\n- \"1001\" — an example of a saved search ID used to retrieve customer records.\n- \"2002\" — a saved search ID configured to return transaction data.\n\n**Important notes**\n- The searchId must be accessible by the user or integration role making the RESTlet call; otherwise, access will be denied.\n- Providing an incorrect or non-existent searchId will result in errors or empty search results.\n- Permissions and sharing settings on the saved search directly affect the data returned by the RESTlet.\n- The saved search must be properly configured with the desired filters, columns, and criteria to ensure meaningful results.\n- 
Changes to the saved search (e.g., modifying filters or columns) will impact the RESTlet output without changing the searchId.\n\n**Dependency chain**\n- Depends on the existence of a saved search configured in the NetSuite account.\n- Requires appropriate user or integration role permissions to access the saved search.\n- Relies on the saved search’s configuration (filters, columns, criteria) to determine the returned dataset."},"useSS2Restlets":{"type":"boolean","description":"useSS2Restlets: Specifies whether to use SuiteScript 2.0 RESTlets for API interactions instead of SuiteScript 1.0 RESTlets. This setting controls the version of RESTlets invoked during API communication with NetSuite, impacting compatibility, performance, and available features.\n  **Field behavior**\n  - Determines the RESTlet version used for all API interactions within the NetSuite integration.\n  - When set to `true`, the system exclusively uses SuiteScript 2.0 RESTlets.\n  - When set to `false` or omitted, SuiteScript 1.0 RESTlets are used by default.\n  - Influences the structure, capabilities, and response formats of API calls.\n  **Implementation guidance**\n  - Enable this flag to take advantage of SuiteScript 2.0’s improved modularity, asynchronous capabilities, and modern JavaScript syntax.\n  - Verify that SuiteScript 2.0 RESTlets are properly deployed, configured, and accessible in the target NetSuite environment before enabling.\n  - Conduct comprehensive testing to ensure existing integrations and workflows remain functional when switching from SuiteScript 1.0 to 2.0 RESTlets.\n  - Coordinate with NetSuite administrators and developers to update or rewrite RESTlets if necessary.\n  **Examples**\n  - `true` — API calls will utilize SuiteScript 2.0 RESTlets, enabling modern scripting features.\n  - `false` — API calls will continue using legacy SuiteScript 1.0 RESTlets for backward compatibility.\n  **Important notes**\n  - SuiteScript 2.0 RESTlets support modular script architecture and ES6+ JavaScript 
features, improving maintainability and performance.\n  - Legacy RESTlets written in SuiteScript 1.0 may not be compatible with SuiteScript 2.0; migration or parallel support might be required.\n  - Switching RESTlet versions can change API response formats and behaviors, potentially impacting downstream systems.\n  - Ensure proper version control and rollback plans are in place when changing this setting.\n  **Dependency chain**\n  - Depends on the deployment and availability of SuiteScript 2.0 RESTlets within the NetSuite account.\n  - Requires that the API client and integration logic support the RESTlet version selected.\n  - May depend on other configuration settings related to authentication and script permissions.\n  **Technical details**\n  - SuiteScript 2.0 RESTlets use the AMD module format and support modular dependency loading."},"restletVersion":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the version type of the NetSuite Restlet being used. It determines the specific version or variant of the Restlet API that the integration will interact with, ensuring compatibility and correct functionality. 
This property is essential for correctly routing requests, handling responses, and maintaining alignment with the expected API contract for the chosen Restlet version.\n\n**Field behavior**\n- Defines the version category or variant of the NetSuite Restlet API.\n- Influences request formatting, response parsing, and available features.\n- Determines which API endpoints and methods are accessible.\n- May impact authentication mechanisms and data serialization formats.\n- Ensures that the integration communicates with the correct Restlet version to prevent incompatibility issues.\n\n**Implementation guidance**\n- Use only predefined and officially supported Restlet version types provided by NetSuite.\n- Validate the type value against the current list of supported Restlet versions before deployment.\n- Update the type property when upgrading to a newer Restlet version or switching to a different variant.\n- Include the type property within the restletVersion object to explicitly specify the API version.\n- Coordinate changes to this property with client applications and integration workflows to maintain compatibility.\n- Monitor NetSuite release notes and documentation for any changes or deprecations related to Restlet versions.\n\n**Examples**\n- \"1.0\" — specifying the stable Restlet API version 1.0.\n- \"2.0\" — specifying the newer Restlet API version 2.0 with enhanced features.\n- \"beta\" — indicating a beta or experimental Restlet version for testing purposes.\n- \"custom\" — representing a custom or extended Restlet version tailored for specific use cases.\n\n**Important notes**\n- Providing an incorrect or unsupported type value can cause API calls to fail or behave unpredictably.\n- The type must be consistent with the NetSuite environment configuration and deployment settings.\n- Changing the type may necessitate updates in client-side code, authentication flows, and data handling logic.\n- Always consult the latest official NetSuite documentation to 
verify supported Restlet versions and their characteristics.\n- The type property is critical for maintaining long-term integration stability and compatibility.\n\n**Dependency chain**\n- Depends on the restlet object to define the overall Restlet configuration.\n- Influences other properties related to authentication, endpoint URLs, and data formats within the restletVersion context.\n- May affect downstream processing components that rely on version-specific behaviors.\n\n**Technical details**\n- Typically represented as a string value"},"enum":{"type":"array","items":{"type":"object"},"description":"A list of predefined string values that the `restletVersion` property can accept, representing the supported versions of the NetSuite RESTlet API. This enumeration restricts the input to specific allowed versions to ensure compatibility, prevent invalid version assignments, and facilitate validation and user interface enhancements.\n\n**Field behavior**\n- Defines the complete set of valid version identifiers for the `restletVersion` property.\n- Ensures that only officially supported RESTlet API versions can be selected or submitted.\n- Enables validation mechanisms to reject unsupported or malformed version inputs.\n- Supports auto-completion and dropdown selections in user interfaces and API clients.\n- Helps maintain consistency and compatibility across different API integrations and deployments.\n\n**Implementation guidance**\n- Populate the enum with all currently supported NetSuite RESTlet API versions as defined by official documentation.\n- Regularly update the enum values to reflect newly released versions or deprecated ones.\n- Use clear and consistent string formats that match the official versioning scheme (e.g., semantic versions like \"1.0\", \"2.0\" or date-based versions like \"2023.1\").\n- Implement strict validation logic to reject any input not included in the enum.\n- Consider backward compatibility when adding or removing enum values to avoid 
breaking existing integrations.\n\n**Examples**\n- [\"1.0\", \"2.0\", \"2.1\"]\n- [\"2023.1\", \"2023.2\", \"2024.1\"]\n- [\"v1\", \"v2\", \"v3\"] (if applicable based on versioning scheme)\n\n**Important notes**\n- Enum values must strictly align with the official NetSuite RESTlet API versioning scheme to ensure correctness.\n- Using a version value outside this enum should trigger a validation error and prevent API calls or configuration saves.\n- The enum acts as a safeguard against runtime errors caused by unsupported or invalid version usage.\n- Changes to this enum should be communicated clearly to all API consumers to manage version compatibility.\n\n**Dependency chain**\n- This enum is directly associated with and constrains the `restletVersion` property.\n- Updates to supported RESTlet API versions necessitate corresponding updates to this enum.\n- Validation logic and UI components rely on this enum to enforce version correctness.\n- Downstream processes that depend on the `restletVersion` value are indirectly dependent on this enum’s accuracy.\n\n**Technical details**\n- Implemented as a string enumeration type within the API schema.\n- Used by validation middleware or schema validators to enforce the allowed values."},"lowercase":{"type":"boolean","description":"lowercase: Specifies whether the restlet version string should be converted to lowercase characters to ensure consistent formatting.\n\n**Field behavior**\n- Determines if the restlet version identifier is transformed entirely to lowercase characters.\n- When set to true, the version string is converted to lowercase before any further processing or output.\n- When set to false or omitted, the version string retains its original casing as provided.\n- Affects only the textual representation of the version string, not its underlying value or meaning.\n\n**Implementation guidance**\n- Use this property to enforce uniform casing for version strings, particularly when interacting with case-sensitive systems or APIs.\n- Validate that the value 
assigned is a boolean (true or false).\n- Define a default behavior (commonly false) when the property is not explicitly set.\n- Apply the lowercase transformation early in the processing pipeline to maintain consistency.\n- Ensure that downstream components respect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The version string \"V1.0\" is converted and output as \"v1.0\".\n- `false` — The version string \"V1.0\" remains unchanged as \"V1.0\".\n- Property omitted — The version string casing remains as originally provided.\n\n**Important notes**\n- Altering the casing of the version string may impact integrations with external systems that are case-sensitive.\n- Confirm compatibility with all consumers of the version string before enabling this property.\n- This property does not modify the semantic meaning or version number, only its textual case.\n- Should be used consistently across all instances where the version string is handled to avoid discrepancies.\n\n**Dependency chain**\n- Requires a valid restlet version string to perform the lowercase transformation.\n- May be used in conjunction with other formatting or validation properties related to the restlet version.\n- The effect of this property should be considered when performing version comparisons or logging.\n\n**Technical details**\n- Implemented as a boolean flag controlling whether to apply a lowercase function to the version string.\n- The transformation typically involves invoking a standard string lowercase method/function.\n- Should be executed before any version string comparisons, storage, or output operations.\n- Does not affect the internal representation of the version beyond its string casing."}},"description":"restletVersion specifies the version of the NetSuite Restlet script to be used for the API call. This property determines which version of the Restlet script is invoked, ensuring compatibility and proper execution of the request. 
It allows precise control over which iteration of the Restlet logic is executed, facilitating version management and smooth transitions between script updates.\n\n**Field behavior**\n- Defines the specific version of the Restlet script to target for the API request.\n- Influences the behavior, output, and compatibility of the API response based on the selected script version.\n- Enables management of multiple Restlet script versions within the same NetSuite environment.\n- Ensures that the API call executes the intended logic corresponding to the specified version.\n- Helps prevent conflicts or errors arising from script changes or updates.\n\n**Implementation guidance**\n- Set this property to exactly match the version identifier of the deployed Restlet script in NetSuite.\n- Confirm that the specified version is properly deployed and active in the NetSuite account before use.\n- Adopt a clear and consistent versioning scheme (e.g., semantic versioning, date-based, or custom tags) to avoid ambiguity.\n- Update this property whenever switching to a newer or different Restlet script version to reflect the intended logic.\n- Validate the version string format to prevent malformed or unsupported values.\n- Coordinate version updates with deployment and testing processes to ensure smooth transitions.\n\n**Examples**\n- \"1.0\"\n- \"2.1\"\n- \"2023.1\"\n- \"v3\"\n- \"release-2024-06\"\n\n**Important notes**\n- Specifying an incorrect or non-existent version will cause the API call to fail or produce unexpected results.\n- Proper versioning supports backward compatibility and controlled feature rollouts.\n- Always verify the Restlet script version in the NetSuite environment before making API calls.\n- Version mismatches can lead to errors, data inconsistencies, or unsupported operations.\n- This property is critical for environments where multiple Restlet versions coexist.\n\n**Dependency chain**\n- Depends on the Restlet scripts deployed and versioned within the 
NetSuite environment.\n- Related to the authentication and authorization context of the API call, as permissions may vary by script version.\n- Works in conjunction with other NetSuite API properties such as scriptId and deploymentId to fully identify the target Restlet.\n- May interact with environment-specific configurations or feature flags tied to particular versions.\n\n**Technical details**\n- Typically represented as a string"},"criteria":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the category or classification of the criteria used within the NetSuite RESTlet API. It defines the nature or kind of the criteria being applied to filter or query data, enabling precise targeting of records based on their domain or entity type.\n\n**Field behavior**\n- Determines the specific category or classification of the criteria.\n- Influences how the criteria is interpreted and processed by the API.\n- Helps in filtering or querying data based on the defined type.\n- Typically expects a predefined set of values corresponding to valid criteria types.\n- Acts as a key discriminator that guides the API in applying the correct schema and validation rules for the criteria.\n- May affect the available fields and operators applicable within the criteria.\n\n**Implementation guidance**\n- Validate the input against the allowed set of criteria types to ensure correctness and prevent errors.\n- Use consistent, descriptive, and case-sensitive naming conventions for the type values as defined by NetSuite.\n- Ensure that the specified type aligns with the corresponding criteria structure and expected data fields.\n- Document all possible values and their meanings clearly for API consumers to facilitate correct usage.\n- Implement error handling to provide meaningful feedback when unsupported or invalid types are supplied.\n- Keep the list of valid types updated in accordance with changes in NetSuite API 
versions and account configurations.\n\n**Examples**\n- \"customer\" — to specify criteria related to customer records.\n- \"transaction\" — to filter based on transaction data such as sales orders or invoices.\n- \"item\" — to apply criteria on inventory or product items.\n- \"employee\" — to target employee-related data.\n- \"vendor\" — to filter vendor or supplier records.\n- \"customrecord_xyz\" — to specify criteria for a custom record type identified by its script ID.\n\n**Important notes**\n- The type value directly affects the behavior of the criteria and the resulting data set.\n- Incorrect or unsupported type values may lead to API errors, empty results, or unexpected behavior.\n- The set of valid types may vary depending on the NetSuite account configuration, customizations, and API version.\n- Always refer to the latest official NetSuite API documentation and your account’s schema for supported types.\n- Some types may require additional permissions or roles to access the corresponding data.\n- The type property is mandatory for criteria filtering to function correctly.\n\n**Dependency chain**\n- Depends on the overall criteria object structure within NetSuite.restlet.criteria.\n- Influences the available fields, operators, and values within the criteria definition"},"join":{"type":"string","description":"join: Specifies the criteria used to join related records or tables in a NetSuite RESTlet query, enabling the retrieval of data based on relationships between different record types.\n**Field behavior**\n- Defines the relationship or link between the primary record and a related record for filtering or data retrieval.\n- Determines how records from different tables are combined based on matching fields.\n- Supports nested joins to allow complex queries involving multiple related records.\n**Implementation guidance**\n- Use valid join names as defined in NetSuite’s schema or documentation for the specific record types.\n- Ensure the join criteria align 
with the intended relationship to avoid incorrect or empty query results.\n- Combine with appropriate filters on the joined records to refine query results.\n- Validate join paths to prevent errors during query execution.\n**Examples**\n- Joining a customer record to its related sales orders using \"salesOrder\" as the join.\n- Using \"item\" join to filter transactions based on item attributes.\n- Nested join example: joining from a sales order to its customer and then to the customer’s address.\n**Important notes**\n- Incorrect join names or paths can cause query failures or unexpected results.\n- Joins may impact query performance; use them judiciously.\n- Not all record types support all possible joins; consult NetSuite documentation.\n- Joins are case-sensitive and must match NetSuite’s API specifications exactly.\n**Dependency chain**\n- Depends on the base record type specified in the query.\n- Works in conjunction with filter criteria to refine results.\n- May depend on authentication and permissions to access related records.\n**Technical details**\n- Typically represented as a string or object indicating the join path.\n- Used within the criteria object of a RESTlet query payload.\n- Supports multiple levels of nesting for complex joins.\n- Must conform to NetSuite’s SuiteScript or RESTlet API join syntax and conventions."},"operator":{"type":"string","description":"operator: Specifies the comparison operator used to evaluate the criteria in a NetSuite RESTlet request.\n  This operator determines how the field value is compared against the specified criteria value(s) to filter or query records.\n  **Field behavior**\n  - Defines the type of comparison between a field and a value (e.g., equality, inequality, greater than).\n  - Influences the logic of the criteria evaluation in RESTlet queries.\n  - Supports various operators such as equals, not equals, greater than, less than, contains, etc.\n  **Implementation guidance**\n  - Use valid 
NetSuite-supported operators to ensure correct query behavior.\n  - Match the operator type with the data type of the field being compared (e.g., use numeric operators for numeric fields).\n  - Combine multiple criteria with appropriate logical operators if needed.\n  - Validate operator values to prevent errors in RESTlet execution.\n  **Examples**\n  - \"operator\": \"is\" (checks if the field value is equal to the specified value)\n  - \"operator\": \"isnot\" (checks if the field value is not equal to the specified value)\n  - \"operator\": \"greaterthan\" (checks if the field value is greater than the specified value)\n  - \"operator\": \"contains\" (checks if the field value contains the specified substring)\n  **Important notes**\n  - The operator must be compatible with the field type and the value provided.\n  - Incorrect operator usage can lead to unexpected query results or errors.\n  - Operators are case-sensitive and should match NetSuite's expected operator strings.\n  **Dependency chain**\n  - Depends on the field specified in the criteria to determine valid operators.\n  - Works in conjunction with the criteria value(s) to form a complete condition.\n  - May be combined with logical operators when multiple criteria are used.\n  **Technical details**\n  - Typically represented as a string value in the RESTlet criteria JSON object.\n  - Supported operators align with NetSuite's SuiteScript search operators.\n  - Must conform to the list of operators recognized by the NetSuite RESTlet API."},"searchValue":{"type":"object","description":"searchValue: The value used as the search criterion to filter results in the NetSuite RESTlet API. 
This value is matched against the specified search field to retrieve relevant records based on the search parameters provided.\n**Field behavior**\n- Acts as the primary input for filtering search results.\n- Supports various data types depending on the search field (e.g., string, number, date).\n- Used in conjunction with other search criteria to refine query results.\n- Can be a partial or full match depending on the search configuration.\n**Implementation guidance**\n- Ensure the value type matches the expected type of the search field.\n- Validate the input to prevent injection attacks or malformed queries.\n- Use appropriate encoding if the value contains special characters.\n- Combine with logical operators or additional criteria for complex searches.\n**Examples**\n- \"Acme Corporation\" for searching customer names.\n- 1001 for searching by internal record ID.\n- \"2024-01-01\" for searching records created on or after a specific date.\n- \"Pending\" for filtering records by status.\n**Important notes**\n- The effectiveness of the search depends on the accuracy and format of the searchValue.\n- Case sensitivity may vary based on the underlying NetSuite configuration.\n- Large or complex search values may impact performance.\n- Null or empty values may result in no filtering or return all records.\n**Dependency chain**\n- Depends on the searchField property to determine which field the searchValue applies to.\n- Works alongside searchOperator to define how the searchValue is compared.\n- Influences the results returned by the RESTlet endpoint.\n**Technical details**\n- Typically passed as a string in the API request payload.\n- May require serialization or formatting based on the API specification.\n- Integrated into the NetSuite search query logic on the server side.\n- Subject to NetSuite’s search limitations and indexing capabilities."},"searchValue2":{"type":"object","description":"searchValue2 is an optional property used to specify the second value in 
a search criterion within the NetSuite RESTlet API. It is typically used in conjunction with search operators that require two values, such as \"between\" or \"not between,\" to define a range or a pair of comparison values.\n\n**Field behavior**\n- Represents the second operand or value in a search condition.\n- Used primarily with operators that require two values (e.g., \"between\", \"not between\").\n- Optional field; may be omitted if the operator only requires a single value.\n- Works alongside searchValue (the first value) to form a complete search criterion.\n\n**Implementation guidance**\n- Ensure that searchValue2 is provided only when the selected operator requires two values.\n- Validate the data type of searchValue2 to match the expected type for the field being searched (e.g., date, number, string).\n- When using range-based operators, searchValue2 should represent the upper bound or second boundary of the range.\n- If the operator does not require a second value, omit this property to avoid errors.\n\n**Examples**\n- For a date range search: searchValue = \"2023-01-01\", searchValue2 = \"2023-12-31\" with operator \"between\".\n- For a numeric range: searchValue = 100, searchValue2 = 200 with operator \"between\".\n- For a \"not between\" operator: searchValue = 50, searchValue2 = 100.\n\n**Important notes**\n- Providing searchValue2 without a compatible operator may result in an invalid search query.\n- The data type and format of searchValue2 must be consistent with searchValue and the field being queried.\n- This property is ignored if the operator only requires a single value.\n- Proper validation and error handling should be implemented when processing this field.\n\n**Dependency chain**\n- Dependent on the \"operator\" property within the same search criterion.\n- Works in conjunction with \"searchValue\" to define the search condition.\n- Part of the \"criteria\" array or object in the NetSuite RESTlet search request.\n\n**Technical 
details**\n- Data type varies depending on the field being searched (string, number, date, etc.).\n- Typically serialized as a JSON property in the RESTlet request payload.\n- Must conform to the expected format for the field and operator to avoid API errors.\n- Used internally by NetSuite to construct the appropriate search filter."},"formula":{"type":"string","description":"formula: A string representing a custom formula used to define criteria for filtering or querying data within the NetSuite RESTlet API.\n  This formula allows users to specify complex conditions using NetSuite's formula syntax, enabling advanced and flexible data retrieval.\n  **Field behavior**\n  - Accepts a formula expression as a string that defines custom filtering logic.\n  - Used to create dynamic and complex criteria beyond standard field-value comparisons.\n  - Evaluated by the NetSuite backend to filter records according to the specified logic.\n  - Can incorporate NetSuite formula functions, operators, and field references.\n  **Implementation guidance**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string before submission to avoid runtime errors.\n  - Use this field when standard criteria fields are insufficient for the required filtering.\n  - Combine with other criteria fields as needed to build comprehensive queries.\n  **Examples**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END = 1\"\n  - \"TO_DATE({createddate}) >= TO_DATE('2023-01-01')\"\n  - \"NVL({amount}, 0) > 1000\"\n  **Important notes**\n  - Incorrect or invalid formulas may cause the API request to fail or return errors.\n  - The formula must be compatible with the context of the query and the fields available.\n  - Performance may be impacted if complex formulas are used extensively.\n  - Formula evaluation is subject to NetSuite's formula engine capabilities and limitations.\n  **Dependency chain**\n  - Depends on the availability of fields 
referenced within the formula.\n  - Works in conjunction with other criteria properties in the request.\n  - Requires understanding of NetSuite's formula syntax and functions.\n  **Technical details**\n  - Data type: string.\n  - Supports NetSuite formula syntax including SQL-like expressions and functions.\n  - Evaluated server-side during the processing of the RESTlet request.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"_id":{"type":"object","description":"Unique identifier for the record within the NetSuite system.\n**Field behavior**\n- Serves as the primary key to uniquely identify a specific record.\n- Immutable once the record is created.\n- Used to retrieve, update, or delete the corresponding record.\n**Implementation guidance**\n- Must be a valid NetSuite internal ID format, typically a string or numeric value.\n- Should be provided when querying or manipulating a specific record.\n- Avoid altering this value to maintain data integrity.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"abcde12345\"\n**Important notes**\n- This ID is assigned by NetSuite and should not be generated manually.\n- Ensure the ID corresponds to an existing record to avoid errors.\n- When used in criteria, it filters the dataset to the exact record matching this ID.\n**Dependency chain**\n- Dependent on the existence of the record in NetSuite.\n- Often used in conjunction with other criteria fields for precise querying.\n**Technical details**\n- Typically represented as a string or integer data type.\n- Used in RESTlet scripts as part of the criteria object to specify the target record.\n- May be included in URL parameters or request bodies depending on the API design."}},"description":"criteria: >\n  Defines the set of conditions or filters used to specify which records should be retrieved or affected by the NetSuite RESTlet operation. 
This property enables clients to precisely narrow down the dataset by applying one or more criteria based on record fields, comparison operators, and values, supporting complex logical combinations to tailor the query results.\n\n  **Field behavior**\n  - Accepts a structured object or an array representing one or multiple filtering conditions.\n  - Supports logical operators such as AND, OR, and nested groupings to combine multiple criteria flexibly.\n  - Each criterion typically includes a field name, an operator (e.g., equals, contains, greaterThan), and a value or set of values.\n  - Enables filtering on various data types including strings, numbers, dates, and booleans.\n  - Used to limit the scope of data returned or manipulated by the RESTlet to only those records that meet the specified conditions.\n  - When omitted or empty, the RESTlet may return all records or apply default filtering behavior as defined by the implementation.\n\n  **Implementation guidance**\n  - Validate the criteria structure rigorously to ensure it conforms to the expected schema before processing.\n  - Support nested criteria groups to allow complex and hierarchical filtering logic.\n  - Map criteria fields and operators accurately to corresponding NetSuite record fields and search operators, considering data types and operator compatibility.\n  - Handle empty or undefined criteria gracefully by returning all records or applying sensible default filters.\n  - Sanitize all input values to prevent injection attacks, malformed queries, or unexpected behavior.\n  - Provide clear error messages when criteria are invalid or unsupported.\n  - Optimize query performance by translating criteria into efficient NetSuite search queries.\n\n  **Examples**\n  - A single criterion filtering records where status equals \"Open\":\n    `{ \"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\" }`\n  - Multiple criteria combined with AND logic:\n    `[{\"field\": \"status\", \"operator\": 
\"equals\", \"value\": \"Open\"}, {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2}]`\n  - Nested criteria combining OR and AND:\n    `{ \"operator\": \"OR\", \"criteria\": [ {\"field\": \"status\", \"operator\": \"equals\", \"value\": \"Open\"}, { \"operator\": \"AND\", \"criteria\": [ {\"field\": \"priority\", \"operator\": \"greaterThan\", \"value\": 2} ] } ] }`"},"columns":{"type":"object","properties":{"type":{"type":"array","items":{"type":"object"},"description":"The type property specifies the data type of the column in the NetSuite RESTlet response. It defines how the data in the column should be interpreted, validated, and handled by the client application to ensure accurate processing and display.\n\n**Field behavior**\n- Indicates the specific data type of the column (e.g., string, integer, date).\n- Determines the format, validation rules, and parsing logic applied to the column data.\n- Guides client applications in correctly interpreting and processing the data.\n- Influences data presentation, transformation, and serialization in the user interface or downstream systems.\n- Helps enforce data consistency and integrity across different components consuming the API.\n\n**Implementation guidance**\n- Use standardized data type names consistent with NetSuite’s native data types and conventions.\n- Ensure the specified type accurately reflects the actual data returned in the column to prevent parsing or runtime errors.\n- Support and validate common data types such as text (string), number (integer, float), date, datetime, boolean, and currency.\n- Validate the type value against a predefined, documented list of acceptable types to maintain consistency.\n- Clearly document any custom or extended data types if they are introduced beyond standard NetSuite types.\n- Consider locale and formatting standards (e.g., ISO 8601 for dates) when defining and interpreting types.\n\n**Examples**\n- \"string\" — for textual or alphanumeric data.\n- 
\"integer\" — for whole numeric values without decimals.\n- \"float\" — for numeric values with decimals (if supported).\n- \"date\" — for date values without time components, formatted as YYYY-MM-DD.\n- \"datetime\" — for combined date and time values, typically in ISO 8601 format.\n- \"boolean\" — for true/false or yes/no values.\n- \"currency\" — for monetary values, often including currency symbols or codes.\n\n**Important notes**\n- The type property is essential for ensuring data integrity and enabling correct client-side processing and validation.\n- Incorrect or mismatched type specifications can lead to data misinterpretation, parsing failures, or runtime errors.\n- Some data types require strict formatting standards (e.g., ISO 8601 for date and datetime) to ensure interoperability.\n- This property is typically mandatory for each column to guarantee predictable behavior.\n- Changes to the type property should be managed carefully to avoid breaking existing integrations.\n\n**Dependency chain**\n- Depends on the actual data returned by the NetSuite RESTlet for the column.\n- Influences how client applications parse, validate, and display the column values."},"join":{"type":"string","description":"join: Specifies the join relationship to be used when retrieving or manipulating data through the NetSuite RESTlet API. 
This property defines how related records are linked together, enabling the inclusion of fields from associated records in the query or operation.\n\n**Field behavior**\n- Determines the type of join between the primary record and related records.\n- Enables access to fields from related records by specifying the join path.\n- Influences the scope and depth of data retrieved or affected by the API call.\n- Supports nested joins to traverse multiple levels of related records.\n\n**Implementation guidance**\n- Use valid join names as defined in the NetSuite schema for the specific record type.\n- Ensure the join relationship exists and is supported by the RESTlet endpoint.\n- Combine with column definitions to specify which fields from the joined records to include.\n- Validate join paths to prevent errors or unexpected results in data retrieval.\n- Consider performance implications when using multiple or complex joins.\n\n**Examples**\n- \"customer\" to join the transaction record with the related customer record.\n- \"item\" to join a sales order with the associated item records.\n- \"employee.manager\" to join an employee record with their manager's record.\n- \"vendor\" to join a purchase order with the vendor record.\n\n**Important notes**\n- Incorrect or unsupported join names will result in API errors.\n- Joins are case-sensitive and must match the exact join names defined in NetSuite.\n- Not all record types support all join relationships.\n- The join property works in conjunction with the columns property to specify which fields to retrieve.\n- Using joins may increase the complexity and execution time of the API call.\n\n**Dependency chain**\n- Depends on the base record type being queried or manipulated.\n- Works together with the columns property to define the data structure.\n- May depend on user permissions to access related records.\n- Influences the structure of the response payload.\n\n**Technical details**\n- Represented as a string indicating the 
join path.\n- Supports dot notation for nested joins (e.g., \"employee.manager\").\n- Used in RESTlet scripts to customize data retrieval.\n- Must conform to NetSuite's internal join naming conventions.\n- Typically included in the columns array objects to specify joined fields."},"summary":{"type":"object","properties":{"type":{"type":"string","description":"Type of the summary column, indicating the aggregation or calculation applied to the data in this column.\n**Field behavior**\n- Specifies the kind of summary operation performed on the column data, such as sum, count, average, minimum, or maximum.\n- Determines how the data in the column is aggregated or summarized in the report or query result.\n- Influences the output format and the meaning of the values in the summary column.\n**Implementation guidance**\n- Use predefined summary types supported by the NetSuite RESTlet API to ensure compatibility.\n- Validate the type value against allowed summary operations to prevent errors.\n- Ensure that the summary type is appropriate for the data type of the column (e.g., sum for numeric fields).\n- Document the summary type clearly to aid users in understanding the aggregation applied.\n**Examples**\n- \"SUM\" — calculates the total sum of the values in the column.\n- \"COUNT\" — counts the number of entries or records.\n- \"AVG\" — computes the average value.\n- \"MIN\" — finds the minimum value.\n- \"MAX\" — finds the maximum value.\n**Important notes**\n- The summary type must be supported by the underlying NetSuite system to function correctly.\n- Incorrect summary types may lead to errors or misleading data in reports.\n- Some summary types may not be applicable to certain data types (e.g., average on text fields).\n**Dependency chain**\n- Depends on the column data type to determine valid summary types.\n- Interacts with the overall report or query configuration to produce summarized results.\n- May affect downstream processing or display logic based on the 
summary output.\n**Technical details**\n- Typically represented as a string value corresponding to the summary operation.\n- Mapped internally to NetSuite’s summary functions in saved searches or reports.\n- Case-insensitive but recommended to use uppercase for consistency.\n- Must conform to the enumeration of allowed summary types defined by the API."},"enum":{"type":"object","description":"enum: >\n  Specifies the set of predefined constant values that the property can take, representing an enumeration.\n  **Field behavior**\n  - Defines a fixed list of allowed values for the property.\n  - Restricts the property's value to one of the enumerated options.\n  - Used to enforce data integrity and consistency.\n  **Implementation guidance**\n  - Enumerated values should be clearly defined and documented.\n  - Use meaningful and descriptive names for each enum value.\n  - Ensure the enum list is exhaustive for the intended use case.\n  - Validate input against the enum values to prevent invalid data.\n  **Examples**\n  - [\"Pending\", \"Approved\", \"Rejected\"]\n  - [\"Small\", \"Medium\", \"Large\"]\n  - [\"Red\", \"Green\", \"Blue\"]\n  **Important notes**\n  - Enum values are case-sensitive unless otherwise specified.\n  - Adding or removing enum values may impact backward compatibility.\n  - Enum should be used when the set of possible values is known and fixed.\n  **Dependency chain**\n  - Typically used in conjunction with the property type (e.g., string or integer).\n  - May influence validation logic and UI dropdown options.\n  **Technical details**\n  - Represented as an array of strings or numbers defining allowed values.\n  - Often implemented as a constant or static list in code.\n  - Used by client and server-side validation mechanisms."},"lowercase":{"type":"object","description":"A boolean property indicating whether the summary column values should be converted to lowercase.\n\n**Field behavior**\n- When set to true, all text values in the summary 
column are transformed to lowercase.\n- When set to false or omitted, the original casing of the summary column values is preserved.\n- Primarily affects string-type summary columns; non-string values remain unaffected.\n\n**Implementation guidance**\n- Use this property to normalize text data for consistent processing or comparison.\n- Ensure that the transformation to lowercase does not interfere with case-sensitive data requirements.\n- Apply this setting during data retrieval or before output formatting in the RESTlet response.\n\n**Examples**\n- `lowercase: true` — converts \"Example Text\" to \"example text\".\n- `lowercase: false` — retains \"Example Text\" as is.\n- Property omitted — defaults to no case transformation.\n\n**Important notes**\n- This property only affects the summary columns specified in the RESTlet response.\n- It does not modify the underlying data in NetSuite, only the output representation.\n- Use with caution when case sensitivity is important for downstream processing.\n\n**Dependency chain**\n- Depends on the presence of summary columns in the RESTlet response.\n- May interact with other formatting or transformation properties applied to columns.\n\n**Technical details**\n- Implemented as a boolean flag within the summary column configuration.\n- Transformation is applied at the data serialization stage before sending the response.\n- Compatible with string data types; other data types bypass this transformation."}},"description":"summary: Configuration object defining how the values of this column are summarized (aggregated) in the underlying NetSuite saved search executed by the RESTlet.\n\n**Field behavior**\n- Groups the settings that control column aggregation: the summary type (e.g., SUM, COUNT, AVG, MIN, MAX), an optional enum of allowed values, and a lowercase normalization flag.\n- When a summary is configured, the RESTlet returns the aggregated result for the column rather than individual record values.\n- Columns without a summary configuration are returned as raw field values.\n\n**Implementation guidance**\n- Choose a summary type appropriate for the column's data type (e.g., SUM or AVG only for numeric fields).\n- Validate the configuration against the allowed summary types before submitting the request.\n- Use the lowercase flag to normalize string output when case-insensitive comparison is required downstream.\n\n**Important notes**\n- Summary settings affect only the shape of the RESTlet output; the underlying NetSuite data is not modified.\n- Unsupported combinations of summary type and column data type may cause errors or misleading results.\n\n**Dependency chain**\n- Depends on the column's data type and the overall search definition.\n- Configured through the type, enum, and lowercase sub-properties of this object.\n\n**Technical details**\n- Represented as an object containing the type, enum, and lowercase settings.\n- Accessible via the RESTlet API under the path `netsuite.restlet.columns.summary`."},"formula":{"type":"string","description":"formula: >\n  A string representing a custom formula used to calculate or derive values dynamically within the context of the NetSuite RESTlet columns. This formula can include field references, operators, and functions supported by NetSuite's formula syntax to perform computations or conditional logic on record data.\n\n  **Field behavior:**\n  - Accepts a formula expression as a string that defines how to compute the column's value.\n  - Can reference other fields, constants, and use NetSuite-supported functions and operators.\n  - Evaluated at runtime to produce dynamic results based on the current record data.\n  - Used primarily in saved searches, reports, or RESTlet responses to customize output.\n  \n  **Implementation guidance:**\n  - Ensure the formula syntax complies with NetSuite's formula language and supported functions.\n  - Validate the formula string to prevent errors during execution.\n  - Use field IDs or aliases correctly within the formula to reference data fields.\n  - Test formulas thoroughly in NetSuite UI before deploying via RESTlet to ensure correctness.\n  - Consider performance implications of complex formulas on large datasets.\n  \n  **Examples:**\n  - \"CASE WHEN {status} = 'Open' THEN 1 ELSE 0 END\" — returns 1 if status is Open, else 0.\n  - \"NVL({amount}, 0) * 0.1\" — calculates 10% of the amount, treating null as zero.\n  - \"TO_CHAR({trandate}, 'YYYY-MM-DD')\" — formats the transaction date as a string.\n  \n  **Important notes:**\n  - The formula must be compatible with the context in which it is used (e.g., search column, RESTlet).\n  - Incorrect formulas can cause runtime errors or 
unexpected results.\n  - Some functions or operators may not be supported depending on the NetSuite version or API context.\n  - Formula evaluation respects user permissions and data visibility.\n  \n  **Dependency chain:**\n  - Depends on the availability of referenced fields within the record or search context.\n  - Relies on NetSuite's formula parsing and evaluation engine.\n  - Interacts with the RESTlet execution environment to produce output.\n  \n  **Technical details:**\n  - Data type: string containing a formula expression.\n  - Supports NetSuite formula syntax including SQL-like CASE statements, arithmetic operations, and built-in functions.\n  - Evaluated server-side during RESTlet execution or saved search processing.\n  - Must be URL-encoded if included in query parameters of HTTP requests."},"label":{"type":"string","description":"label: |\n  The display name or title of the column as it appears in the user interface or reports.\n  **Field behavior**\n  - Represents the human-readable name for a column in a dataset or report.\n  - Used to identify the column in UI elements such as tables, forms, or export files.\n  - Should be concise yet descriptive enough to convey the column’s content.\n  **Implementation guidance**\n  - Ensure the label is localized if the application supports multiple languages.\n  - Avoid using technical jargon; prefer user-friendly terminology.\n  - Keep the label length reasonable to prevent UI truncation.\n  - Update the label consistently when the underlying data or purpose changes.\n  **Examples**\n  - \"Customer Name\"\n  - \"Invoice Date\"\n  - \"Total Amount\"\n  - \"Status\"\n  **Important notes**\n  - The label does not affect the data or the column’s functionality; it is purely for display.\n  - Changing the label does not impact data processing or storage.\n  - Labels should be unique within the same context to avoid confusion.\n  **Dependency chain**\n  - Depends on the column definition within the dataset or report configuration.\n  - May be linked to localization 
resources if internationalization is supported.\n  **Technical details**\n  - Typically a string data type.\n  - May support Unicode characters for internationalization.\n  - Stored as metadata associated with the column definition in the system."},"sort":{"type":"boolean","description":"sort: >\n  Specifies the sorting order for the column values in the query results.\n  **Field behavior**\n  - Determines the order in which the data is returned based on the column values.\n  - Accepts values that indicate ascending or descending order.\n  - Influences how the dataset is organized before being returned by the API.\n  **Implementation guidance**\n  - Use standardized values such as \"asc\" for ascending and \"desc\" for descending.\n  - Ensure the sort parameter corresponds to a valid column in the dataset.\n  - Multiple sort parameters may be supported to define secondary sorting criteria.\n  - Validate the sort value to prevent errors or unexpected behavior.\n  **Examples**\n  - \"asc\" to sort the column values in ascending order.\n  - \"desc\" to sort the column values in descending order.\n  **Important notes**\n  - Sorting can impact performance, especially on large datasets.\n  - If no sort parameter is provided, the default sorting behavior of the API applies.\n  - Sorting is case-insensitive.\n  **Dependency chain**\n  - Depends on the column specified in the query or request.\n  - May interact with pagination parameters to determine the final data output.\n  **Technical details**\n  - Typically implemented as a string value in the API request.\n  - May be part of a query string or request body depending on the API design.\n  - Sorting logic is handled server-side before data is returned to the client."}},"description":"columns: >\n  Specifies the set of columns (fields) to be retrieved or manipulated in the NetSuite RESTlet operation. 
This property defines which specific fields from the records should be included in the response or used during processing, enabling precise control over the data returned or affected. By selecting only relevant columns, it helps optimize performance and reduce payload size, ensuring efficient data handling tailored to the operation’s requirements.\n\n  **Field behavior**\n  - Determines the exact fields (columns) to be included in data retrieval, update, or manipulation operations.\n  - Supports specifying multiple columns to customize the dataset returned or processed.\n  - Limits the data payload by including only the specified columns, improving performance and reducing bandwidth.\n  - Influences the structure, content, and size of the response from the RESTlet.\n  - If omitted, defaults to retrieving all available columns for the target record type, which may impact performance.\n  - Columns specified must be valid and accessible for the target record type to avoid errors.\n\n  **Implementation guidance**\n  - Accepts an array or list of column identifiers, which can be simple strings or objects with detailed specifications (e.g., `{ name: \"fieldname\" }`).\n  - Column identifiers should correspond exactly to valid NetSuite record field names or internal IDs.\n  - Validate column names against the target record schema before execution to prevent runtime errors.\n  - Use this property to optimize RESTlet calls by limiting data to only necessary fields, especially in large datasets.\n  - When specifying complex columns (e.g., joined fields or formula fields), ensure the correct syntax and structure are used.\n  - Consider the permissions and roles associated with the RESTlet user to ensure access to the specified columns.\n\n  **Examples**\n  - `[\"internalid\", \"entityid\", \"email\"]` — retrieves basic identifying and contact fields.\n  - `[ { name: \"internalid\" }, { name: \"entityid\" }, { name: \"email\" } ]` — object notation for specifying columns.\n  - 
`[\"tranid\", \"amount\", \"status\"]` — retrieves transaction-specific fields.\n  - `[ { name: \"custbody_custom_field\" }, { name: \"createddate\" } ]` — includes custom and system fields.\n  - `[\"item\", \"quantity\", \"rate\"]` — fields relevant to item records or line items.\n\n  **Important notes**\n  - Omitting the `columns` property typically results in all available columns being retrieved for the target record type, which can increase payload size and impact performance."},"markExportedBatchSize":{"type":"object","properties":{"type":{"type":"number","description":"type: >\n  Specifies the data type of the `markExportedBatchSize` property, defining the kind of value it accepts or represents. This property is crucial for ensuring that the batch size value is correctly interpreted, validated, and processed by the API. It dictates how the value is serialized and deserialized during API communication, thereby maintaining data integrity and consistency across different system components.\n  **Field behavior**\n  - Determines the expected format and constraints of the `markExportedBatchSize` value.\n  - Influences validation rules applied to the batch size input to prevent invalid data.\n  - Guides serialization and deserialization processes for accurate data exchange.\n  - Ensures compatibility with client and server-side processing logic.\n  **Implementation guidance**\n  - Must be assigned a valid and recognized data type within the API schema, such as \"integer\" or \"string\".\n  - Should align precisely with the nature of the batch size value to avoid type mismatches.\n  - Implement strict validation checks to confirm the value conforms to the specified type before processing.\n  - Consider the implications of the chosen type on downstream processing and storage.\n  **Examples**\n  - `\"integer\"` — indicating the batch size is represented as a whole number.\n  - `\"string\"` — if the batch size is provided as a textual representation.\n  - `\"number\"` — for numeric values that may include decimals (less common for batch sizes).\n  **Important notes**\n  - The `type` must 
consistently reflect the actual data format of `markExportedBatchSize` to prevent runtime errors.\n  - Mismatched or incorrect type declarations can cause API failures, data corruption, or unexpected behavior.\n  - Changes to this property’s type should be carefully managed to maintain backward compatibility.\n  **Dependency chain**\n  - Directly defines the data handling of the `markExportedBatchSize` property.\n  - Affects validation logic and error handling in API endpoints related to batch processing.\n  - May impact client applications that consume or provide this property’s value.\n  **Technical details**\n  - Corresponds to standard JSON data types such as integer, string, boolean, etc.\n  - Utilized by the API framework to enforce type safety, ensuring data integrity during request and response cycles.\n  - Plays a role in schema validation tools and automated documentation generation.\n  - Influences serialization libraries in encoding and decoding the property value correctly."},"cLocked":{"type":"object","description":"cLocked indicates whether the batch size setting for marking exports is locked, preventing any modifications by users or automated processes. 
This property serves as a control mechanism to safeguard critical configuration parameters related to export batch processing.\n\n**Field behavior**\n- Represents a boolean flag that determines if the batch size configuration is immutable.\n- When set to true, the batch size cannot be altered via the user interface, API calls, or automated scripts.\n- When set to false, the batch size remains configurable and can be adjusted as operational needs evolve.\n- Changes to this flag directly affect the ability to update the batch size setting.\n\n**Implementation guidance**\n- Utilize this flag to enforce configuration stability and prevent accidental or unauthorized changes to batch size settings.\n- Validate this flag before processing any update requests to the batch size to ensure compliance.\n- Typically managed by system administrators or during initial system setup to lock down critical parameters.\n- Incorporate audit logging when this flag is changed to maintain traceability.\n- Consider integrating with role-based access controls to restrict who can toggle this flag.\n\n**Examples**\n- cLocked: true — The batch size setting is locked, disallowing any modifications.\n- cLocked: false — The batch size setting is unlocked and can be updated as needed.\n\n**Important notes**\n- Locking the batch size helps maintain consistent export throughput and prevents performance degradation caused by unintended configuration changes.\n- Modifications to this flag should be performed cautiously and ideally under change management procedures.\n- This property is only applicable in environments where batch size configuration for marking exports is relevant.\n- Ensure that dependent systems or processes respect this lock to avoid configuration conflicts.\n\n**Dependency chain**\n- Dependent on the presence of the markExportedBatchSize configuration object within the system.\n- Interacts with user permission settings and roles that govern configuration management capabilities.\n- 
May affect downstream export processing workflows that rely on batch size parameters.\n\n**Technical details**\n- Data type: Boolean.\n- Default value is false, indicating the batch size is unlocked unless explicitly locked.\n- Persisted as part of the NetSuite.restlet.markExportedBatchSize configuration object.\n- Changes to this property should trigger validation and possibly system notifications to administrators."},"min":{"type":"object","description":"Minimum number of records to process in a single batch during the markExported operation in the NetSuite RESTlet integration. This property sets the lower boundary for batch sizes, ensuring that each batch contains at least this number of records before processing begins. It plays a crucial role in balancing processing efficiency and system resource utilization by controlling the granularity of batch operations.\n\n**Field behavior**\n- Defines the lower limit for the batch size when processing records in the markExported operation.\n- Ensures that each batch contains at least this minimum number of records before processing.\n- Helps optimize performance by preventing excessively small batches that could increase overhead.\n- Works in conjunction with the 'max' batch size to establish a valid range for batch processing.\n- Influences how the system partitions large datasets into manageable chunks for processing.\n\n**Implementation guidance**\n- Choose a value based on system capabilities, expected record complexity, and API rate limits to avoid timeouts or throttling.\n- Ensure this value is a positive integer greater than zero.\n- Must be less than or equal to the corresponding 'max' batch size to maintain logical consistency.\n- Test different values to find an optimal balance between processing speed and resource consumption.\n- Consider the impact on downstream systems and network latency when setting this value.\n\n**Examples**\n- 10 (process at least 10 records per batch)\n- 50 (process at least 50 
records per batch)\n- 100 (process at least 100 records per batch for high-throughput scenarios)\n\n**Important notes**\n- Setting this value too low may lead to inefficient processing due to increased overhead from handling many small batches.\n- Setting this value too high may cause processing delays, timeouts, or exceed API rate limits.\n- Always use in conjunction with the 'max' batch size to define a valid and effective batch size range.\n- Changes to this value should be tested in a staging environment before production deployment to assess impact.\n\n**Dependency chain**\n- Directly related to 'netsuite.restlet.markExportedBatchSize.max', which defines the upper limit of batch size.\n- Utilized within the batch processing logic of the markExported operation in the NetSuite RESTlet integration.\n- Influences and is influenced by system performance parameters and API constraints.\n\n**Technical details**\n- Data type: Integer\n- Must be a positive integer greater than zero\n- Should be validated at configuration time to ensure it does not exceed the 'max' batch size"},"max":{"type":"object","description":"Maximum number of records to process in a single batch during the markExported operation, defining the upper limit for batch size to balance performance and resource utilization effectively.\n\n**Field behavior**\n- Specifies the maximum count of records processed in one batch during the markExported operation.\n- Controls the batch size to optimize throughput while preventing system overload.\n- Helps manage memory usage and processing time by limiting batch volume.\n- Directly affects the frequency and size of API calls or processing cycles.\n\n**Implementation guidance**\n- Determine an optimal value based on system capacity, performance benchmarks, and typical workload.\n- Ensure the value complies with any API or platform-imposed batch size limits.\n- Consider network conditions, processing latency, and error handling when setting the batch size.\n- 
Validate that the input is a positive integer and handle invalid values gracefully.\n- Adjust dynamically if possible, based on runtime metrics or error feedback.\n\n**Examples**\n- 1000: Processes up to 1000 records per batch, suitable for balanced performance.\n- 500: Smaller batch size for environments with limited resources or higher reliability needs.\n- 2000: Larger batch size for high-throughput scenarios where system resources allow.\n- 50: Very small batch size for testing or debugging purposes.\n\n**Important notes**\n- Excessively high values may lead to timeouts, memory exhaustion, or degraded system responsiveness.\n- Very low values can increase overhead due to more frequent batch processing cycles.\n- This parameter is critical for tuning the performance and stability of the markExported operation.\n- Changes to this value should be tested in a controlled environment before production deployment.\n\n**Dependency chain**\n- Integral to the batch processing logic within the markExported operation.\n- Interacts with system-level batch size constraints and API rate limits.\n- Influences how records are chunked and iterated during export marking.\n- May affect downstream processing components that consume batch outputs.\n\n**Technical details**\n- Must be an integer value greater than zero.\n- Typically configured via API request parameters or system configuration files.\n- Should be compatible with the data processing framework and any middleware handling batch operations.\n- May require synchronization with other batch-related settings to ensure consistency."}},"description":"markExportedBatchSize: The number of records to process in each batch when marking records as exported in NetSuite via the RESTlet API. 
This setting controls how many records are updated in a single API call to optimize performance and resource usage during the export marking process.\n\n**Field behavior**\n- Determines the size of each batch of records to be marked as exported in NetSuite.\n- Controls the number of records processed per RESTlet API call for export status updates.\n- Directly impacts the throughput and efficiency of the export marking operation.\n- Influences the balance between processing speed and system resource consumption.\n- Helps manage API rate limits by controlling the volume of records processed per request.\n\n**Implementation guidance**\n- Select a batch size that balances efficient processing with system stability and API constraints.\n- Avoid excessively large batch sizes to prevent API timeouts, memory exhaustion, or throttling.\n- Consider the typical volume of records to be exported and the performance characteristics of your NetSuite environment.\n- Test various batch sizes under realistic load conditions to identify the optimal value.\n- Monitor API response times and error rates to adjust the batch size dynamically if needed.\n- Ensure compatibility with any rate limiting or concurrency restrictions imposed by the NetSuite RESTlet API.\n\n**Examples**\n- Setting `markExportedBatchSize` to 100 processes 100 records per batch, suitable for moderate workloads.\n- Using a batch size of 500 may be appropriate for high-volume exports on systems with robust resources.\n- A smaller batch size like 50 can help avoid API throttling or timeouts in environments with limited resources or strict rate limits.\n- Adjusting the batch size to 200 after observing API latency improves the overall export marking throughput.\n\n**Important notes**\n- The batch size setting directly affects the speed, reliability, and resource utilization of marking records as exported.\n- Incorrect batch size configurations can cause partial updates, failed API calls, or increased processing 
times.\n- This property is specific to the RESTlet-based integration with NetSuite and does not apply to other export mechanisms.\n- Changes to this setting should be tested thoroughly to avoid unintended disruptions in the export workflow.\n- Consider the impact on downstream processes that depend on timely and accurate export status updates.\n\n**Dependency chain**\n- Depends on the RESTlet API endpoint responsible for marking records as exported in NetSuite.\n- Influenced by NetSuite API rate limits, timeout settings, and system performance characteristics.\n- Works in conjunction with other export configuration parameters."},"TODO":{"type":"object","description":"TODO: A placeholder property used to indicate tasks, features, or sections within the NetSuite RESTlet integration that require implementation, completion, or further development. This property functions as a clear marker for developers and project managers to identify areas that are pending work, ensuring that these tasks are tracked and addressed before finalizing the API. 
It is not intended to hold any functional data or be part of the production API contract until fully implemented.\n\n**Field behavior**\n- Serves as a temporary indicator for incomplete, pending, or planned tasks within the API schema.\n- Does not contain operational data or affect API functionality until properly defined and implemented.\n- Helps track development progress and highlight areas needing attention during the development lifecycle.\n- Should be removed or replaced with finalized implementations once the associated task is completed.\n- May be used to generate reports or dashboards reflecting outstanding development work.\n\n**Implementation guidance**\n- Utilize the TODO property to explicitly flag API sections requiring further coding, configuration, or review.\n- Accompany TODO entries with detailed comments or references to issue tracking systems (e.g., JIRA, GitHub Issues) for clarity and traceability.\n- Establish regular review cycles to update, resolve, or remove TODO properties to maintain an accurate representation of development status.\n- Avoid deploying TODO properties in production environments to prevent confusion, incomplete features, or potential runtime errors.\n- Integrate TODO tracking with project management workflows to ensure timely resolution.\n\n**Examples**\n- TODO: Implement OAuth 2.0 authentication mechanism for RESTlet endpoints.\n- TODO: Add comprehensive validation rules for input parameters to ensure data integrity.\n- TODO: Complete error handling and logging for data retrieval failures.\n- TODO: Optimize response payload size for improved performance.\n- TODO: Integrate unit tests covering all new RESTlet functionalities.\n\n**Important notes**\n- The presence of TODO properties signifies incomplete or provisional functionality and should not be interpreted as finalized API features.\n- Unresolved TODO items can lead to partial implementations, unexpected behavior, or runtime errors if not addressed before release.\n- 
Effective management and timely resolution of TODO properties are critical for maintaining code quality, project timelines, and overall system stability.\n- TODO properties should be clearly documented and communicated within the development team to avoid oversight.\n\n**Dependency chain**\n- TODO properties may depend on other modules, services, or API components that are under development or pending integration.\n- Often linked to external project management or issue tracking tools for assignment, prioritization, and progress monitoring."},"hooks":{"type":"object","properties":{"batchSize":{"type":"number","description":"batchSize specifies the number of records or items to be processed in a single batch during the execution of the NetSuite RESTlet hook. This parameter helps control the workload size for each batch operation, optimizing performance and resource utilization by balancing processing efficiency and system constraints.\n\n**Field behavior**\n- Determines the maximum number of records or items processed in one batch cycle.\n- Directly influences the frequency and duration of batch processing operations.\n- Helps manage memory consumption and processing time by limiting the batch workload.\n- Affects overall throughput and latency of batch operations, impacting system responsiveness.\n- Controls how data is segmented and processed in discrete units during RESTlet execution.\n\n**Implementation guidance**\n- Configure batchSize based on the system’s processing capacity, expected data volume, and performance goals.\n- Use smaller batch sizes in environments with limited resources or strict execution time limits to prevent timeouts.\n- Larger batch sizes can improve throughput by reducing the number of batch cycles but may increase individual batch processing time and risk of hitting governance limits.\n- Always validate that batchSize is a positive integer greater than zero to ensure proper operation.\n- Take into account NetSuite API governance 
limits, such as usage units and execution time, when determining batchSize.\n- Monitor system performance and adjust batchSize dynamically if possible to optimize processing efficiency.\n- Ensure batchSize aligns with other batch-related configurations to maintain consistency and predictable behavior.\n\n**Examples**\n- batchSize: 100 — processes 100 records per batch, balancing throughput and resource use.\n- batchSize: 500 — processes 500 records per batch for higher throughput in robust environments.\n- batchSize: 10 — processes 10 records per batch for fine-grained control and minimal resource impact.\n- batchSize: 1 — processes records individually, useful for debugging or very resource-sensitive scenarios.\n\n**Important notes**\n- Excessively high batchSize values may cause processing timeouts, exceed NetSuite governance limits, or lead to memory exhaustion.\n- Very low batchSize values can result in inefficient processing due to increased overhead and more frequent batch invocations.\n- The optimal batchSize is context-dependent and should be determined through testing and monitoring.\n- batchSize should be consistent with other batch processing parameters to avoid conflicts or unexpected behavior.\n- Changes to batchSize may require adjustments in error handling and retry logic to accommodate different batch sizes.\n\n**Dependency chain**\n- Depends on the batch processing logic implemented within the RESTlet hook.\n- Influences and is influenced by NetSuite governance limits and overall system performance."},"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is used to precisely reference and manipulate a specific file during API operations, particularly within pre-send processing hooks in RESTlets. 
It ensures accurate targeting of file resources by uniquely identifying files stored in the NetSuite file cabinet.\n\n**Field behavior**\n- Represents a unique numeric or alphanumeric identifier assigned by NetSuite to each file.\n- Used to retrieve, update, or reference a file during the preSend hook execution.\n- Must correspond to an existing file within the NetSuite file cabinet.\n- Immutable throughout the file’s lifecycle; remains constant unless the file is deleted and recreated.\n- Serves as a key reference for file-related operations in automated workflows and integrations.\n\n**Implementation guidance**\n- Always validate that the fileInternalId exists and is accessible before performing operations.\n- Use this ID to fetch file metadata, content, or perform updates within the preSend hook.\n- Implement error handling to manage cases where the fileInternalId does not correspond to a valid or accessible file.\n- Ensure that the executing user or integration has the necessary permissions to access the file referenced by this ID.\n- Avoid hardcoding this ID; retrieve dynamically when possible to maintain flexibility and accuracy.\n\n**Examples**\n- 12345\n- \"67890\"\n- \"file_98765\"\n\n**Important notes**\n- The fileInternalId is specific to each NetSuite account and environment; it is not globally unique across different accounts.\n- Do not expose this identifier publicly, as it may reveal sensitive internal system details.\n- Modifications to the file’s name, location, or metadata do not affect the internal ID.\n- This ID is essential for linking files reliably in automated processes, integrations, and RESTlet hooks.\n- Deleting and recreating a file will result in a new fileInternalId.\n\n**Dependency chain**\n- Depends on the existence of the file within the NetSuite file cabinet.\n- Requires appropriate permissions to access or manipulate the file.\n- Utilized within preSend hooks to reference files accurately during API operations.\n\n**Technical 
details**\n- Typically a numeric or alphanumeric string assigned by NetSuite upon file creation.\n- Stored internally within NetSuite’s database as the primary key for file records.\n- Used as a parameter in RESTlet API calls to identify and operate on specific files.\n- Immutable identifier that does not change unless the file is deleted and recreated."},"function":{"type":"string","description":"function: >\n  Specifies the name of the custom function to be invoked during the preSend hook phase in the NetSuite RESTlet integration. This function enables developers to implement custom logic for processing or modifying the request payload immediately before it is dispatched to the NetSuite RESTlet endpoint. It serves as a critical extension point for tailoring request data, adding headers, sanitizing inputs, or performing any preparatory steps necessary to meet integration requirements.\n\n  **Field behavior**\n  - Identifies the exact function to execute during the preSend hook phase.\n  - Allows customization and transformation of the outgoing request payload or context.\n  - The specified function is called synchronously or asynchronously depending on implementation support.\n  - Modifications made by this function directly affect the data sent to the NetSuite RESTlet.\n  - Must reference a valid, accessible function within the integration’s runtime environment.\n\n  **Implementation guidance**\n  - Confirm that the function name matches a defined and exported function within the integration codebase.\n  - The function should accept the current request payload or context as input and return the modified payload or context.\n  - Implement robust error handling within the function to avoid unhandled exceptions that could disrupt the request flow.\n  - Optimize the function for performance to minimize latency in request processing.\n  - If asynchronous operations are supported, ensure proper handling of promises or callbacks.\n  - Document the 
function’s behavior clearly to facilitate maintenance and future updates.\n\n  **Examples**\n  - \"sanitizePayload\" — cleans and validates request data before sending.\n  - \"addAuthenticationHeaders\" — injects necessary authentication tokens or headers.\n  - \"transformRequestData\" — restructures or enriches the payload to match API expectations.\n  - \"logRequestDetails\" — captures request metadata for auditing or debugging purposes.\n\n  **Important notes**\n  - The function must be correctly implemented and accessible; otherwise, runtime errors will occur.\n  - This hook executes immediately before the request is sent, so any changes here directly impact the outgoing data.\n  - Ensure that the function’s side effects do not unintentionally alter unrelated parts of the request or integration state.\n  - If asynchronous processing is used, verify that the integration framework supports it to avoid unexpected behavior.\n  - Testing the function thoroughly is critical to ensure reliable integration behavior.\n\n  **Dependency chain**\n  - Requires the preSend hook to be enabled and properly configured in the integration settings.\n  - Depends on the presence of the named function in the integration’s runtime environment."},"configuration":{"type":"object","description":"configuration: >\n  An object containing configuration settings that influence the behavior of the preSend hook in the NetSuite RESTlet integration. This object serves as a centralized control point for customizing how requests are processed and modified before being sent to the NetSuite RESTlet endpoint. 
It can include a variety of parameters such as authentication credentials, logging preferences, request modification flags, timeout settings, retry policies, and feature toggles that tailor the preSend hook’s operation to specific integration needs.\n  **Field behavior**\n  - Holds key-value pairs that define how the preSend hook processes and modifies outgoing requests.\n  - Can include settings such as authentication parameters (e.g., tokens, API keys), request modification flags (e.g., header adjustments), logging options (e.g., enable/disable logging), timeout durations, retry counts, and feature toggles.\n  - Is accessed and potentially updated dynamically during the execution of the preSend hook to adapt request handling based on current context or conditions.\n  - Influences the flow and outcome of the preSend hook, potentially altering request payloads, headers, or other metadata before transmission.\n  **Implementation guidance**\n  - Define clear, descriptive, and consistent keys within the configuration object to avoid ambiguity and ensure maintainability.\n  - Validate all configuration values rigorously before applying them to prevent runtime errors or unexpected behavior.\n  - Use this object to centralize control over preSend hook behavior, enabling easier updates, debugging, and feature management.\n  - Document all possible configuration options, their expected data types, default values, and their specific effects on the preSend hook’s operation.\n  - Ensure sensitive information within the configuration (e.g., authentication tokens) is handled securely, following best practices for encryption and access control.\n  - Consider versioning the configuration schema if multiple versions of the preSend hook or integration exist.\n  **Examples**\n  - `{ \"enableLogging\": true, \"authToken\": \"abc123\", \"modifyHeaders\": false }`\n  - `{ \"retryCount\": 3, \"timeout\": 5000 }`\n  - `{ \"useNonProduction\": true, \"customHeader\": \"X-Custom-Value\" 
}`\n  - `{ \"authenticationType\": \"OAuth2\", \"refreshToken\": \"xyz789\", \"logLevel\": \"verbose\" }`\n  - `{ \"enableCaching\": false, \"maxRetries\": 5, \"requestPriority\": \"high\" }`"}},"description":"preSend is a hook function that is executed immediately before a RESTlet sends a response back to the client. It allows for last-minute modifications or logging of the response data, enabling customization of the output or performing additional processing steps prior to transmission. This hook provides a critical interception point to ensure the response adheres to business rules, compliance requirements, or client-specific formatting before it leaves the server.\n\n**Field behavior**\n- Invoked right before the RESTlet response is sent to the client.\n- Receives the response data as input and can modify it.\n- Can be used to log, audit, or transform the response payload.\n- Should return the final response object to be sent.\n- Supports both synchronous and asynchronous execution depending on the implementation.\n- Any changes made here directly impact the final output received by the client.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment and use case.\n- Ensure any modifications maintain the expected response format and data integrity.\n- Avoid long-running or blocking operations to prevent delaying the response delivery.\n- Handle errors gracefully within the hook to prevent disrupting the overall RESTlet response flow.\n- Validate the modified response to ensure it complies with API schema and client expectations.\n- Use this hook to enforce security measures such as masking sensitive data or adding audit trails.\n\n**Examples**\n- Adding a timestamp or metadata (e.g., request ID, processing duration) to the response object.\n- Masking or removing sensitive information (e.g., personal identifiers, confidential fields) from the response.\n- Logging response details for 
auditing or debugging purposes.\n- Transforming response data structure or formatting to match client-specific requirements.\n- Injecting additional headers or status information into the response payload.\n\n**Important notes**\n- This hook runs after all business logic but before the response is finalized and sent.\n- Modifications here directly affect what the client ultimately receives.\n- Errors thrown in this hook may cause the RESTlet to fail or return an error response.\n- Use this hook to enforce response-level policies, compliance, or data governance rules.\n- Avoid introducing side effects that could alter the idempotency or consistency of the response.\n- Testing this hook thoroughly is critical to ensure it does not unintentionally break client integrations.\n\n**Dependency chain**\n- Triggered after the main RESTlet processing logic completes and the response object is prepared.\n- Precedes the actual sending of the HTTP response to the client.\n- May depend on prior hooks or processing steps that prepare the response object."}},"description":"hooks: >\n  An object containing hook definitions that specify custom functions to be executed at various points during the lifecycle of the RESTlet script in NetSuite. 
These hooks enable developers to inject additional logic before or after standard processing events, allowing for extensive customization and extension of the RESTlet's behavior to meet specific business requirements.\n\n  **Field behavior**\n  - Defines one or more hooks that trigger custom code execution at designated lifecycle events.\n  - Hooks can be configured to run at standard lifecycle events such as beforeLoad, beforeSubmit, afterSubmit, or at custom-defined events tailored to specific needs.\n  - Each hook entry typically includes the event name, the callback function to execute, and optional parameters or context information.\n  - Supports both synchronous and asynchronous execution modes depending on the hook type and implementation context.\n  - Hooks execute in the order they are defined, allowing for controlled sequencing of custom logic.\n  - Hooks can modify input data, perform validations, log information, or alter output responses as needed.\n\n  **Implementation guidance**\n  - Ensure that each hook function is properly defined, accessible, and tested within the RESTlet script context to avoid runtime failures.\n  - Validate hook event names against the list of supported lifecycle events to prevent misconfiguration and errors.\n  - Use hooks to encapsulate reusable business logic, enforce data integrity, or integrate with external systems and services.\n  - Implement robust error handling within hook functions to prevent exceptions from disrupting the main RESTlet processing flow.\n  - Document each hook’s purpose, expected inputs, outputs, and side effects clearly to facilitate maintainability and future enhancements.\n  - Consider performance implications of hooks, especially those performing asynchronous operations or external calls, to maintain RESTlet responsiveness.\n  - When multiple hooks are defined for the same event, design them to avoid conflicts and ensure predictable outcomes.\n\n  **Examples**\n  - Defining a hook to validate and 
sanitize input data before processing a RESTlet request.\n  - Adding a hook to log detailed request and response information after the RESTlet completes execution for auditing purposes.\n  - Using a hook to modify or enrich the response payload dynamically before it is returned to the client application.\n  - Implementing a hook to trigger notifications or update related records asynchronously after data submission.\n  - Creating a custom hook event to perform additional security checks beyond standard validation.\n\n  **Important notes**\n  - Improper use or misconfiguration of hooks can lead to unexpected behavior, performance degradation, or runtime errors."},"cLocked":{"type":"boolean","description":"cLocked indicates whether the record is locked, preventing any modifications to its data.\n**Field behavior**\n- Represents the lock status of a record within the system.\n- When set to true, the record is locked and cannot be edited or updated.\n- When set to false, the record is unlocked and available for modifications.\n- Typically used to control concurrent access and maintain data integrity.\n**Implementation guidance**\n- Should be checked before performing update or delete operations on the record.\n- Setting this field to true should trigger UI or API restrictions on editing.\n- Ensure that only authorized users or processes can change the lock status.\n- Use this field to prevent race conditions or accidental data overwrites.\n**Examples**\n- cLocked: true — The record is locked and read-only.\n- cLocked: false — The record is unlocked and editable.\n**Important notes**\n- Locking a record does not necessarily prevent read access; it only restricts modifications.\n- The lock status may be temporary or permanent depending on business rules.\n- Changes to this field might require audit logging for compliance.\n**Dependency chain**\n- May depend on user permissions or roles to set or clear the lock.\n- Could be related to workflow states or approval processes 
that enforce locking.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false (unlocked).\n- Stored as a flag in the record metadata or status fields.\n- Changes to cLocked should be atomic to avoid inconsistent states."}},"description":"restlet: The identifier or URL of the NetSuite Restlet script to be invoked for performing custom server-side logic or data processing within the NetSuite environment. This property specifies which Restlet endpoint the integration or application should call to execute specific business logic, automate workflows, or retrieve and manipulate data dynamically. It can be represented as an internal script ID, a relative URL path, or a full external URL depending on the integration scenario and access method.\n\n**Field behavior**\n- Defines the specific target Restlet script or endpoint for API calls within the NetSuite environment.\n- Routes requests to custom server-side scripts developed using NetSuite’s SuiteScript framework.\n- Enables execution of tailored business processes, data validations, transformations, or integrations.\n- Supports various HTTP methods such as GET, POST, PUT, and DELETE depending on the Restlet’s implementation.\n- Can be specified as a script ID, a relative URL path, or a fully qualified URL based on deployment and access context.\n- Acts as the primary entry point for invoking custom logic that extends or complements standard NetSuite functionality.\n\n**Implementation guidance**\n- Confirm that the Restlet script is properly deployed, enabled, and accessible within the target NetSuite account.\n- Use the internal script ID format (e.g., \"customscript_my_restlet\") when calling via SuiteScript or internal APIs.\n- Use the relative URL path (e.g., \"/app/site/hosting/restlet.nl?script=123&deploy=1\") or full URL for external integrations or REST clients.\n- Verify that the Restlet supports the required HTTP methods and handles input/output data formats correctly (JSON, XML, 
etc.).\n- Secure the Restlet endpoint by implementing authentication mechanisms such as OAuth 2.0, token-based authentication, or NetSuite session credentials.\n- Implement robust error handling and retry logic to manage scenarios where the Restlet is unavailable or returns errors.\n- Test the Restlet thoroughly in a non-production environment before deploying to production to ensure expected behavior and security compliance.\n\n**Examples**\n- \"customscript_my_restlet\" (internal script ID used in SuiteScript calls)\n- \"/app/site/hosting/restlet.nl?script=123&deploy=1\" (relative URL for REST calls within NetSuite)\n- \"https://rest.netsuite.com/app/site/hosting/restlet.nl?script=456&deploy=2\" (full external URL for third-party integrations)\n- \"customscript_sales_order_processor\" (a Restlet script ID for custom sales order processing)"},"distributed":{"type":"object","properties":{"recordType":{"type":"string","description":"The lowercase script ID of the NetSuite record type for the distributed export.\n\nMust be the exact lowercase script ID as defined in NetSuite (e.g., \"customer\", \"salesorder\", \"invoice\", \"vendorbill\").\nThis is NOT the display name - use the script ID which is always lowercase with no spaces.\n\n**Examples**\n- \"customer\"\n- \"invoice\"\n- \"salesorder\"\n- \"itemfulfillment\"\n- \"vendorbill\"\n- \"employee\"\n- \"purchaseorder\"\n- \"creditmemo\"\n\n**Important notes**\n- Must be lowercase script ID, not the display name\n- Custom record types use format \"customrecord_scriptid\""},"executionContext":{"type":"array","description":"An array of execution contexts that will trigger this distributed export.\n\nSpecifies which NetSuite execution contexts should trigger this export. 
When a record change occurs in one of the specified contexts, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"userinterface\", \"webstore\"]\n\n**Valid values**\n- \"userinterface\" - User interactions in the NetSuite UI\n- \"webservices\" - SOAP web services calls\n- \"csvimport\" - CSV import operations\n- \"offlineclient\" - Offline client synchronization\n- \"portlet\" - Portlet interactions\n- \"scheduled\" - Scheduled script executions\n- \"suitelet\" - Suitelet executions\n- \"custommassupdate\" - Custom mass update operations\n- \"workflow\" - Workflow actions\n- \"webstore\" - Web store transactions\n- \"userevent\" - User event script triggers\n- \"mapreduce\" - Map/Reduce script operations\n- \"restlet\" - RESTlet API calls\n- \"webapplication\" - Web application interactions\n- \"restwebservices\" - REST web services calls\n\n**Example**\n```json\n[\"userinterface\", \"webstore\"]\n```","default":["userinterface","webstore"],"items":{"type":"string","enum":["userinterface","webservices","csvimport","offlineclient","portlet","scheduled","suitelet","custommassupdate","workflow","webstore","userevent","mapreduce","restlet","webapplication","restwebservices"]}},"disabled":{"type":"boolean","description":"disabled: Indicates whether the distributed feature in NetSuite is disabled or not. 
This boolean flag controls the availability and operational status of the distributed functionalities within the NetSuite integration, allowing administrators or systems to enable or disable these features as needed.\n\n**Field behavior**\n- When set to true, the distributed feature is fully disabled, preventing any distributed operations or workflows from executing.\n- When set to false or omitted, the distributed feature remains enabled and fully operational.\n- Acts as a toggle switch to control the accessibility of distributed capabilities within the NetSuite environment.\n- Changes to this flag directly influence the behavior of distributed-related processes and integrations.\n\n**Implementation guidance**\n- Use a boolean value: `true` to disable the distributed feature, `false` to enable it.\n- Before disabling, verify that no critical processes depend on distributed functionality to avoid disruptions.\n- Implement validation checks to confirm the current state before initiating distributed operations.\n- Provide clear user notifications or system logs when the feature is disabled to aid in troubleshooting and auditing.\n- Consider the impact on dependent modules and ensure coordinated updates if disabling this feature.\n\n**Examples**\n- `disabled: true`  # The distributed feature is turned off, disabling all related operations.\n- `disabled: false` # The distributed feature is active and available for use.\n- Omitted `disabled` property defaults to `false`, enabling the feature by default.\n\n**Important notes**\n- Disabling this feature may interrupt workflows or processes that rely on distributed capabilities, potentially causing failures or delays.\n- Some systems may require a restart or reinitialization after changing this setting for the change to take full effect.\n- Modifying this property should be restricted to users with appropriate permissions to prevent unauthorized disruptions.\n- Always assess the broader impact on the NetSuite integration 
before toggling this flag.\n\n**Dependency chain**\n- Directly affects modules and properties that rely on distributed functionality within the NetSuite integration.\n- Should be checked and respected by any API calls, workflows, or processes that involve distributed features.\n- May influence error handling and fallback mechanisms in distributed-related operations.\n\n**Technical details**\n- Data type: Boolean\n- Default value: `false` (distributed feature enabled)\n- Located under the `netsuite.distributed` namespace in the API schema\n- Changing this property triggers state changes in distributed feature availability within the system"},"executionType":{"type":"array","description":"An array of record operation types that will trigger this distributed export.\n\nSpecifies which types of record operations should trigger the export. When a record operation matches one of the specified types, the export will be triggered.\n\n**Default value**\nIf not specified, defaults to: [\"create\", \"edit\", \"xedit\"]\n\n**Valid values**\n- \"create\" - New record creation\n- \"edit\" - Record editing via UI\n- \"delete\" - Record deletion\n- \"xedit\" - Inline editing (edit without opening the record)\n- \"copy\" - Record copy operation\n- \"view\" - Record view\n- \"cancel\" - Transaction cancellation\n- \"approve\" - Approval action\n- \"reject\" - Rejection action\n- \"pack\" - Pack operation (fulfillment)\n- \"ship\" - Ship operation (fulfillment)\n- \"markcomplete\" - Mark as complete\n- \"reassign\" - Reassignment action\n- \"editforecast\" - Forecast editing\n- \"dropship\" - Drop ship operation\n- \"specialorder\" - Special order operation\n- \"orderitems\" - Order items action\n- \"paybills\" - Pay bills action\n- \"print\" - Print action\n- \"email\" - Email action\n\n**Example**\n```json\n[\"create\", \"edit\", 
\"xedit\"]\n```","default":["create","edit","xedit"],"items":{"type":"string","enum":["create","edit","delete","xedit","copy","view","cancel","approve","reject","pack","ship","markcomplete","reassign","editforecast","dropship","specialorder","orderitems","paybills","print","email"]}},"qualifier":{"type":"string","description":"qualifier: A string value used to specify a particular qualifier or modifier that further defines, categorizes, or scopes the associated data within the NetSuite distributed context. This property enables more granular identification, filtering, and processing of data by applying specific criteria or attributes relevant to business logic, integration workflows, or operational requirements. It serves as an optional but powerful tool to distinguish data subsets, enhance data semantics, and support conditional handling in distributed NetSuite environments.\n\n**Field behavior**\n- Acts as an additional identifier or modifier to refine the meaning, scope, or classification of the associated data.\n- Enables filtering, categorization, or qualification of data entries in distributed NetSuite operations based on specific business rules.\n- Typically optional but may be mandatory in certain contexts or API endpoints where precise data segmentation is required.\n- Accepts string values that correspond to predefined, standardized, or custom qualifiers recognized by the system or integration layer.\n- Supports multiple use cases including regional segmentation, priority tagging, type classification, and channel identification.\n\n**Implementation guidance**\n- Ensure the qualifier value strictly aligns with the accepted set of qualifiers defined in the business domain, integration specifications, or system configuration.\n- Implement validation mechanisms to verify that the qualifier string matches allowed or expected values to avoid errors, misclassification, or unintended behavior.\n- Adopt consistent naming conventions and formatting standards (e.g., 
lowercase, hyphen-separated) for qualifiers to maintain clarity, readability, and interoperability across systems.\n- Maintain comprehensive documentation of all custom and standard qualifiers used, including their intended meaning and usage scenarios, to facilitate maintenance, troubleshooting, and future integrations.\n- Consider the impact of qualifiers on downstream processing, reporting, and analytics to ensure they are leveraged effectively and do not introduce ambiguity.\n\n**Examples**\n- \"region-us\" to specify data related to the United States region.\n- \"priority-high\" to indicate transactions or records with high priority status.\n- \"type-inventory\" to qualify records associated with inventory management.\n- \"channel-online\" to denote sales or operations conducted through online channels.\n- \"segment-enterprise\" to classify data pertaining to enterprise-level customers.\n- \"status-active\" to filter or identify active records within a dataset.\n\n**Important notes**\n- The qualifier should be meaningful, contextually relevant, and aligned with the business logic to ensure accurate data interpretation.\n- Incorrect, inconsistent, or ambiguous qualifiers can lead to data misinterpretation, processing errors, or integration failures.\n- The property may interact with other filtering"},"skipExportFieldId":{"type":"string","description":"skipExportFieldId is an identifier for a specific field within the NetSuite distributed configuration that determines whether certain data should be excluded from export processes. It serves as a control mechanism to selectively omit data associated with particular fields during export operations, enabling tailored and efficient data handling.\n\n**Field behavior:**  \n- Acts as a flag or marker to skip exporting data associated with the specified field ID.  \n- When set, the export routines will omit the data linked to this field from being included in the export payload.  
\n- Helps control and customize the export behavior on a per-field basis within distributed NetSuite configurations.  \n- Does not affect data visibility or storage within NetSuite; it only influences export output.  \n- Supports multiple uses in scenarios where sensitive, redundant, or irrelevant data should be excluded from exports.\n\n**Implementation guidance:**  \n- Ensure the field ID provided corresponds to a valid and existing field within the NetSuite schema to prevent export errors.  \n- Use this property to optimize export operations by excluding unnecessary or sensitive data fields, improving performance and compliance.  \n- Validate the field ID format and existence before applying it to avoid runtime issues during export.  \n- Integrate with export logic to check this property before including fields in the export output, ensuring consistent behavior.  \n- Consider maintaining a centralized list or configuration of skipExportFieldIds for easier management and auditing.  \n\n**Examples:**  \n- skipExportFieldId: \"custbody_internal_notes\" (skips exporting the internal notes custom field)  \n- skipExportFieldId: \"item_custom_field_123\" (excludes a specific item custom field from export)  \n- skipExportFieldId: \"custentity_sensitive_data\" (prevents export of sensitive customer entity data)  \n\n**Important notes:**  \n- This property only affects export operations and does not alter data storage or visibility within NetSuite.  \n- Misconfiguration may lead to incomplete data exports if critical fields are skipped unintentionally, potentially impacting downstream processes.  \n- Should be used judiciously to maintain data integrity and compliance with business rules and regulatory requirements.  \n- Changes to this property should be documented and reviewed to avoid unintended data omissions.  \n\n**Dependency chain**\n- Depends on the existence and validity of the specified field ID within the NetSuite schema.  
\n- Relies on export routines to check and respect this property during data export processes.  \n- May interact with other export configuration settings that control data inclusion/exclusion."},"hooks":{"type":"object","properties":{"preSend":{"type":"object","properties":{"fileInternalId":{"type":"string","description":"fileInternalId: The unique internal identifier assigned to a file within the NetSuite system. This identifier is essential for accurately referencing and manipulating a specific file during various operations such as retrieval, update, or deletion within NetSuite's environment. It acts as a primary key that ensures precise targeting of files in automated workflows, scripts, and API calls.\n\n**Field behavior**\n- Serves as a unique and immutable key to identify a file in the NetSuite file cabinet.\n- Utilized in pre-send hooks and other automation points to specify the exact file being processed or referenced.\n- Must correspond to an existing file’s internal ID within the NetSuite account to ensure valid operations.\n- Enables consistent and reliable file operations by linking actions directly to the file’s system-assigned identifier.\n\n**Implementation guidance**\n- Always ensure the value is the numeric internal ID of an existing file in NetSuite, supplied as a string to match this field's declared type.\n- Validate the internal ID before performing any file operations to avoid runtime errors or failed transactions.\n- Use this ID when invoking NetSuite APIs, SuiteScript, or other integration points to fetch, update, or delete files.\n- Avoid hardcoding the internal ID; instead, dynamically retrieve it through queries or API calls to maintain adaptability and reduce maintenance overhead.\n- Handle exceptions gracefully when the ID does not correspond to any file, providing meaningful error messages or fallback logic.\n\n**Examples**\n- \"12345\"\n- \"987654\"\n- \"1001\"\n\n**Important notes**\n- The internal ID is system-generated by NetSuite and guaranteed to be unique within the 
account.\n- This ID is distinct from file names, external URLs, or folder identifiers and should not be confused with them.\n- Using an incorrect or non-existent internal ID will cause operations to fail, potentially interrupting workflows.\n- The internal ID remains constant for the lifetime of the file and does not change even if the file is moved or renamed.\n\n**Dependency chain**\n- Depends on the file existing in the NetSuite file cabinet prior to referencing.\n- Often used alongside related properties such as file name, folder ID, file type, or metadata to provide context or additional filtering.\n- May be required input for downstream processes that manipulate or validate file contents.\n\n**Technical details**\n- A numeric identifier assigned by NetSuite upon file creation, carried as a string in this schema.\n- Immutable once assigned; cannot be altered or reassigned to a different file.\n- Used internally by NetSuite APIs, SuiteScript, and integration"},"function":{"type":"string","description":"function: Specifies the name of the custom function to be executed as a pre-send hook within the NetSuite distributed system. 
This function is invoked immediately before sending data or requests, allowing for custom processing, validation, or modification of the payload to ensure data integrity and compliance with business rules.\n\n  **Field behavior**\n  - Defines the exact function to be called prior to sending data or requests.\n  - Enables interception, inspection, and manipulation of data before transmission.\n  - Supports integration of custom business logic, validation, enrichment, or logging steps.\n  - Must reference a valid, accessible function within the current execution context or environment.\n  - The function’s execution outcome can influence whether the sending process proceeds, is modified, or is aborted.\n\n  **Implementation guidance**\n  - Ensure the function name corresponds exactly to a defined function in the codebase, script environment, or registered hooks.\n  - The function should accept the expected input parameters (such as the payload or context) and return appropriate results or modifications.\n  - Implement robust error handling within the function to prevent unhandled exceptions that could disrupt the sending workflow.\n  - Document the function’s purpose, input/output contract, and side effects clearly for maintainability and future reference.\n  - Validate that the function executes efficiently and completes promptly to avoid introducing latency or blocking the sending process.\n  - If asynchronous operations are necessary, ensure they are properly awaited or handled to guarantee completion before sending.\n  - Follow consistent naming conventions aligned with the overall codebase or organizational standards.\n\n  **Examples**\n  - \"validateCustomerData\"\n  - \"sanitizePayloadBeforeSend\"\n  - \"logPreSendActivity\"\n  - \"customAuthorizationCheck\"\n  - \"enrichOrderDetails\"\n  - \"checkInventoryAvailability\"\n\n  **Important notes**\n  - The function must be synchronous or correctly handle asynchronous behavior to ensure it completes before the 
send operation proceeds.\n  - If the function throws an error or returns a failure state, it may block, modify, or abort the sending process depending on the implementation.\n  - Avoid performing long-running or blocking operations within the function to maintain system responsiveness.\n  - The function should not perform irreversible side effects unless explicitly intended, as it runs prior to data transmission.\n  - Ensure the function does not introduce security vulnerabilities, such as exposing sensitive data or allowing injection attacks.\n  - Consistent and clear error reporting within the function aids in troubleshooting and operational monitoring."},"configuration":{"type":"object","description":"configuration: Configuration settings for the preSend hook in the NetSuite distributed system.\n  This property defines the parameters and options that control the behavior and execution of the preSend hook, allowing customization of how data is processed before being sent.\n  It enables fine-tuning of operational aspects such as retries, timeouts, validation rules, logging, and payload constraints to ensure reliable and efficient data transmission.\n  **Field behavior**\n  - Specifies the customizable settings that dictate how the preSend hook operates.\n  - Controls data manipulation, validation, and preparation steps prior to sending.\n  - Can include flags, thresholds, retry policies, timeout durations, logging options, and other operational parameters.\n  - May be optional or mandatory depending on the specific implementation and requirements of the preSend hook.\n  - Supports nested configuration objects to allow detailed and structured settings.\n  **Implementation guidance**\n  - Define clear, well-documented configuration options that directly impact the preSend process.\n  - Validate all configuration values rigorously to ensure they conform to expected data types, ranges, and formats.\n  - Provide sensible default values for 
optional parameters to enhance usability and reduce configuration errors.\n  - Ensure backward compatibility when extending or modifying configuration options.\n  - Include comprehensive documentation for each configuration parameter, including its purpose, accepted values, and effect on hook behavior.\n  - Consider security implications when allowing configuration of headers or other sensitive parameters.\n  **Examples**\n  - `{ \"retryCount\": 3, \"timeout\": 5000, \"enableLogging\": true }`\n  - `{ \"validateSchema\": true, \"maxPayloadSize\": 1048576 }`\n  - `{ \"customHeaders\": { \"X-Custom-Header\": \"value\" } }`\n  - `{ \"retryPolicy\": { \"maxAttempts\": 5, \"backoffStrategy\": \"exponential\" }, \"enableLogging\": false }`\n  - `{ \"payloadCompression\": \"gzip\", \"timeout\": 10000 }`\n  **Important notes**\n  - Incorrect or invalid configuration values can cause the preSend hook to fail or behave unpredictably.\n  - Thorough testing of configuration changes in development or staging environments is critical before deploying to production.\n  - Some configuration changes may require restarting or reinitializing the hook or related services to take effect.\n  - Sensitive configuration parameters should be handled securely to prevent exposure of confidential information.\n  - Configuration should be version-controlled and documented to facilitate maintenance"}},"description":"preSend is a hook function that is invoked immediately before a request is sent to the NetSuite API. 
It allows for custom processing, modification, or validation of the request payload and headers, enabling dynamic adjustments or logging prior to transmission.\n\n**Field behavior**\n- Executed synchronously or asynchronously just before the API request is dispatched to the NetSuite endpoint.\n- Receives the full request object, including headers, body, query parameters, and other relevant metadata.\n- Permits modification of any part of the request, such as altering headers, adjusting the payload, or changing query parameters.\n- Supports validation logic to ensure the request meets required criteria; throwing an error will abort the request.\n- Enables injection of dynamic data like authentication tokens, custom headers, or correlation IDs.\n- Can be used for logging or auditing outgoing request details for debugging or monitoring purposes.\n\n**Implementation guidance**\n- Implement as a function or asynchronous callback that accepts the request context object.\n- Ensure that any asynchronous operations within the hook are properly awaited to maintain request integrity.\n- Keep processing lightweight to avoid introducing latency or blocking the request pipeline.\n- Handle exceptions carefully; unhandled errors will prevent the request from being sent.\n- Centralize request customization logic here to improve maintainability and reduce duplication.\n- Avoid side effects that could impact other parts of the system or subsequent requests.\n- Validate inputs thoroughly to prevent malformed requests from being sent to the API.\n\n**Examples**\n- Adding a Bearer token or API key to the Authorization header dynamically before sending.\n- Logging the complete request payload and headers for troubleshooting network issues.\n- Modifying request parameters based on user roles or feature flags at runtime.\n- Validating that required fields are present and correctly formatted, throwing an error if validation fails.\n- Adding a unique request ID header for tracing requests 
across distributed systems.\n\n**Important notes**\n- This hook executes on every outgoing request, so its performance impact should be minimized.\n- Any modifications made within preSend directly affect the final request sent to NetSuite.\n- Throwing an error inside this hook will abort the request and propagate the error upstream.\n- This hook is strictly for pre-request processing and should not be used for handling responses.\n- Avoid making network calls or heavy computations inside this hook to prevent delays.\n- Ensure thread safety if the hook accesses shared resources or global state.\n\n**Dependency chain**\n- Invoked after request construction but before the request is dispatched.\n- Precedes any network transmission or retry"}},"description":"hooks: A collection of user-defined functions or callbacks that are executed at specific points during the lifecycle of the distributed process within the NetSuite integration. These hooks enable customization and extension of the default behavior by injecting custom logic before, during, or after key operations, allowing for flexible adaptation to unique business requirements and integration scenarios.\n\n  **Field behavior**\n  - Contains one or more functions or callback references mapped to specific lifecycle events.\n  - Each hook corresponds to a distinct event or stage in the distributed process, such as pre-processing, post-processing, error handling, or data transformation.\n  - Hooks are invoked automatically by the system at predefined points in the workflow.\n  - Can modify data payloads, trigger additional workflows or external API calls, perform validations, or handle errors.\n  - Supports both synchronous and asynchronous execution models depending on the hook’s purpose and implementation.\n  - Execution order of hooks for the same event is deterministic and should be documented.\n  - Hooks should be designed to avoid side effects that could impact other parts of the process.\n\n  
**Implementation guidance**\n  - Define hooks as named functions or references to executable code blocks compatible with the integration environment.\n  - Ensure hooks are idempotent to prevent unintended consequences from repeated or retried executions.\n  - Validate all inputs and outputs rigorously within hooks to maintain data integrity and system stability.\n  - Use hooks to integrate with external systems, perform custom validations, enrich data, or implement business-specific logic.\n  - Document each hook’s purpose, expected inputs, outputs, and any side effects clearly for maintainability.\n  - Implement robust error handling within hooks to gracefully manage exceptions without disrupting the main process flow.\n  - Test hooks thoroughly in isolated and integrated environments to ensure reliability and performance.\n  - Consider security implications, such as data exposure or injection risks, when implementing hooks.\n\n  **Examples**\n  - A hook that validates transaction data before it is sent to NetSuite to ensure compliance with business rules.\n  - A hook that logs detailed transaction metadata after a successful operation for auditing purposes.\n  - A hook that modifies or enriches payload data during transformation stages to align with NetSuite’s schema.\n  - A hook that triggers email or system notifications upon error occurrences to alert support teams.\n  - A hook that retries failed operations with exponential backoff to improve resilience.\n\n  **Important notes**\n  - Improperly implemented hooks can cause process failures, data inconsistencies, or performance degradation."},"sublists":{"type":"object","description":"sublists: A collection of related sublist objects associated with the main record, representing grouped sets of data entries that provide additional details or linked information within the NetSuite distributed record context. 
These sublists enable the organization of complex, hierarchical data by encapsulating related records or line items that belong to the primary record, facilitating detailed data management and interaction within the system.\n  **Field behavior**\n  - Contains multiple sublist entries, each representing a distinct group of related data tied to the main record.\n  - Organizes and structures complex record information into manageable, logically grouped sections.\n  - Supports nested and hierarchical data representation, allowing for detailed and granular record composition.\n  - Typically handled as arrays or lists of sublist objects, supporting iteration and manipulation.\n  - Reflects one-to-many relationships inherent in NetSuite records, such as line items or related entities.\n  **Implementation guidance**\n  - Ensure each sublist object strictly adheres to the defined schema and data types for its specific sublist type.\n  - Validate the consistency and referential integrity of sublist data in relation to the main record to prevent data anomalies.\n  - Design for dynamic handling of sublists, accommodating varying sizes including empty or large collections.\n  - Implement robust CRUD (Create, Read, Update, Delete) operations for sublist entries to maintain accurate and up-to-date data.\n  - Consider transactional integrity when modifying sublists to ensure changes are atomic and consistent.\n  **Examples**\n  - A sales order record containing sublists for item lines detailing products, quantities, and prices; shipping addresses specifying delivery locations; and payment schedules outlining installment plans.\n  - An employee record with sublists for dependents listing family members; employment history capturing previous roles and durations; and certifications documenting professional qualifications.\n  - A customer record including sublists for contacts with communication details; transactions recording purchase history; and communication logs tracking 
interactions and notes.\n  **Important notes**\n  - Sublists are critical for accurately modeling one-to-many relationships within NetSuite records, enabling detailed data capture and reporting.\n  - Modifications to sublists can trigger business logic, workflows, or validations that affect overall record processing.\n  - Maintaining synchronization between sublists and the main record is essential to preserve data integrity and prevent inconsistencies.\n  - Performance considerations should be taken into account when handling large sublists to optimize system responsiveness.\n  **Dependency chain**\n  - Depends on the main record schema"},"referencedFields":{"type":"array","items":{"type":"string"},"description":"referencedFields: A list of field identifiers that are referenced within the current context, typically used to denote dependencies or relationships between fields in a NetSuite distributed environment. This property helps in mapping out how different fields interact or rely on each other, facilitating data integrity, validation, and synchronization across distributed components or services.\n  **Field behavior**\n  - Contains identifiers of fields that the current field or process depends on or interacts with.\n  - Used to establish explicit relationships or dependencies between multiple fields.\n  - Enables tracking of data flow and ensures consistency across distributed systems.\n  - Supports dynamic resolution of dependencies during runtime or configuration.\n  **Implementation guidance**\n  - Populate with valid and existing field identifiers as defined in the NetSuite schema or metadata.\n  - Verify that all referenced fields are accessible and correctly scoped within the current context.\n  - Use this property to manage dependencies critical for data validation, synchronization, or processing logic.\n  - Keep the list updated to reflect any schema changes to avoid broken references or inconsistencies.\n  - Avoid circular references by carefully managing dependencies 
between fields.\n  **Examples**\n  - [\"customerId\", \"orderDate\", \"shippingAddress\"]\n  - [\"invoiceNumber\", \"paymentStatus\"]\n  - [\"productCode\", \"inventoryLevel\", \"reorderThreshold\"]\n  **Important notes**\n  - Referenced fields must be unique within the list to prevent redundancy and confusion.\n  - Modifications to referenced fields can impact dependent processes; changes should be tested thoroughly.\n  - This property contains only the identifiers (names or keys) of fields, not their actual data or values.\n  - Proper documentation of referenced fields improves maintainability and clarity of dependencies.\n  **Dependency chain**\n  - Often linked with fields that require validation or data aggregation from other fields.\n  - May influence or be influenced by business rules, workflows, or automation scripts that depend on multiple fields.\n  - Changes in referenced fields can cascade to affect dependent fields or processes.\n  **Technical details**\n  - Data type: Array of strings.\n  - Each string represents a unique field identifier within the NetSuite distributed environment.\n  - The array should be serialized in a format compatible with the consuming system (e.g., JSON array).\n  - Maximum length and allowed characters for field identifiers should conform to NetSuite naming conventions."},"relatedLists":{"type":"object","description":"relatedLists: A collection of related list objects that represent associated records or entities linked to the primary record within the NetSuite distributed data model. These related lists provide contextual information and enable navigation to connected data, facilitating comprehensive data retrieval and management. 
Each related list encapsulates a set of records that share a defined relationship with the primary record, such as transactions, contacts, or custom entities, thereby supporting a holistic view of the data ecosystem.\n\n**Field behavior**\n- Contains multiple related list entries, each representing a distinct association to the primary record.\n- Enables retrieval of linked records such as transactions, custom records, subsidiary data, or other relevant entities.\n- Supports hierarchical or relational data structures by referencing related entities, allowing nested or multi-level associations.\n- Typically read-only in the context of distributed data retrieval but may support updates or synchronization depending on API capabilities and permissions.\n- May include metadata such as record counts, last updated timestamps, or status indicators for each related list.\n- Supports dynamic inclusion or exclusion based on user permissions, record type, and system configuration.\n\n**Implementation guidance**\n- Populate with relevant related list objects that are directly associated with the primary record, ensuring accurate representation of relationships.\n- Ensure each related list entry includes unique identifiers, descriptive metadata, and navigation links or references necessary for data access and traversal.\n- Maintain consistency in naming conventions, data structures, and field formats to align with NetSuite’s standard data model and API specifications.\n- Implement pagination, filtering, or sorting mechanisms to efficiently handle large sets of related records within each list.\n- Validate all references and links to ensure data integrity, preventing broken or stale connections within the distributed data environment.\n- Consider caching strategies or incremental updates to optimize performance when dealing with frequently accessed related lists.\n- Respect and enforce access control and permission checks to ensure users only see related lists they are authorized 
to access.\n\n**Examples**\n- A customer record’s relatedLists might include “Transactions” (e.g., sales orders, invoices), “Contacts” (associated individuals), and “Cases” (customer support tickets).\n- An invoice record’s relatedLists could contain “Payments” (payment records), “Shipments” (delivery details), and “Adjustments” (billing corrections).\n- A custom record type might have relatedLists such as “Attachments” (files linked to the record) or “Notes” (user comments or annotations).\n- A vendor record’s relatedLists may include “Purchase Orders,” “Bills,” and “Vendor Contacts"},"forceReload":{"type":"boolean","description":"forceReload: Indicates whether the system should forcibly reload the data or configuration, bypassing any cached or stored versions to ensure the most up-to-date information is used. This flag is critical in scenarios where data accuracy and freshness are paramount, such as after configuration changes or data updates that must be immediately reflected.\n\n**Field behavior:**\n- When set to true, the system bypasses all caches and reloads data or configurations directly from the primary source, ensuring the latest state is retrieved.\n- When set to false or omitted, the system may utilize cached or previously stored data to optimize performance and reduce load times.\n- Primarily used in contexts where stale data could lead to errors, inconsistencies, or outdated processing results.\n- The reload operation triggered by this flag typically involves invalidating caches and refreshing dependent components or services.\n\n**Implementation guidance:**\n- Use this flag judiciously to balance between data freshness and system performance, avoiding unnecessary reloads that could degrade responsiveness.\n- Ensure that enabling forceReload initiates a comprehensive refresh cycle, including clearing relevant caches and reinitializing configuration or data layers.\n- Implement robust error handling during the reload process to manage potential 
failures without causing system downtime or inconsistent states.\n- Monitor system resource utilization and response times when forceReload is active to identify and mitigate performance bottlenecks.\n- Document scenarios and triggers for using forceReload to guide developers and operators in its appropriate application.\n\n**Examples:**\n- forceReload: true  \n  (forces the system to bypass caches and reload data/configuration from the authoritative source immediately)\n- forceReload: false  \n  (allows the system to serve data from cache if available, improving response time)\n- forceReload omitted  \n  (defaults to false behavior, relying on cached data unless otherwise specified)\n\n**Important notes:**\n- Excessive or unnecessary use of forceReload can lead to increased latency, higher resource consumption, and potential service degradation.\n- This flag does not validate the correctness or integrity of the source data; it only ensures the latest available data is fetched.\n- Downstream systems or processes should be designed to handle the potential delays or transient states caused by forced reloads.\n- Coordination with cache invalidation policies and data synchronization mechanisms is essential to maintain overall system consistency.\n\n**Dependency chain:**\n- Relies on underlying cache management and invalidation frameworks to effectively bypass stored data.\n- Interacts with data retrieval modules, configuration loaders, and possibly distributed synchronization services.\n- May trigger logging, monitoring, or"},"ioEnvironment":{"type":"string","description":"ioEnvironment specifies the input/output environment configuration for the NetSuite distributed system, defining how data is handled, processed, and routed across different operational environments. 
This property determines the context in which I/O operations occur, influencing data flow, security protocols, performance characteristics, and consistency guarantees within the distributed architecture.\n\n**Field behavior**\n- Determines the operational context for all input/output processes within the distributed NetSuite system.\n- Influences how data is read from and written to various storage systems, message queues, or communication channels.\n- Affects performance tuning, security measures, and data consistency mechanisms based on the selected environment.\n- Typically set during system initialization or configuration phases and remains stable during runtime to ensure predictable behavior.\n- May trigger environment-specific logging, monitoring, and error-handling strategies.\n\n**Implementation guidance**\n- Validate the ioEnvironment value against a predefined set of supported environments such as \"development,\" \"staging,\" \"production,\" and any custom configurations.\n- Ensure that the selected environment is compatible with other system settings related to data handling, network communication, and security policies.\n- Implement robust error handling and fallback mechanisms to manage unsupported or invalid environment values gracefully.\n- Clearly document the operational implications, limitations, and recommended use cases for each environment option to guide system administrators and developers.\n- Coordinate environment settings across all distributed nodes to maintain consistency and prevent configuration drift.\n\n**Examples**\n- \"development\" — used for local testing and debugging with relaxed security and simplified data handling.\n- \"staging\" — a pre-production environment that closely mirrors production settings for validation and testing.\n- \"production\" — the live environment optimized for security, performance, and data integrity.\n- \"custom\" — user-defined environment configurations tailored for specialized I/O requirements or 
experimental setups.\n\n**Important notes**\n- Changing the ioEnvironment typically requires restarting services or reinitializing connections to apply new configurations.\n- The environment setting directly impacts data integrity, access controls, and compliance with security policies.\n- Sensitive data must be handled according to the security standards appropriate for the selected environment.\n- Consistency across all distributed nodes is critical; all nodes should be configured with compatible ioEnvironment values to avoid data inconsistencies or communication failures.\n- Misconfiguration can lead to degraded performance, security vulnerabilities, or data loss.\n\n**Dependency chain**\n- Depends on system initialization and configuration management components.\n- Interacts with data storage modules, network communication layers, and security frameworks.\n- Influences logging, monitoring, and error"},"ioDomain":{"type":"string","description":"ioDomain specifies the Internet domain name used for input/output operations within the distributed NetSuite environment. 
This domain is critical for routing data requests and responses between distributed components and services, ensuring seamless communication and integration across the system.\n\n**Field behavior**\n- Defines the domain name utilized for network communication in distributed NetSuite environments.\n- Serves as the base domain for constructing URLs for API calls, data synchronization, and service endpoints.\n- Must be a valid, fully qualified domain name (FQDN) adhering to DNS standards.\n- Typically remains consistent within a deployment environment but can differ across environments such as development, staging, and production.\n- Influences routing, load balancing, and failover mechanisms within distributed services.\n\n**Implementation guidance**\n- Verify that the domain is properly configured in DNS and is resolvable by all distributed components.\n- Validate the domain format against standard domain naming conventions (e.g., RFC 1035).\n- Ensure the domain supports secure communication protocols (e.g., HTTPS with valid SSL/TLS certificates).\n- Coordinate updates to ioDomain with network, security, and operations teams to maintain service continuity.\n- When migrating or scaling services, update ioDomain accordingly and propagate changes to all dependent components.\n- Monitor domain accessibility and performance to detect and resolve connectivity issues promptly.\n\n**Examples**\n- \"api.netsuite.com\"\n- \"distributed-services.companydomain.com\"\n- \"staging-netsuite.io.company.com\"\n- \"eu-west-1.api.netsuite.com\"\n- \"dev-networks.internal.company.com\"\n\n**Important notes**\n- Incorrect or misconfigured ioDomain values can cause failed network requests, service interruptions, and data synchronization errors.\n- The domain must support necessary security certificates to enable encrypted communication and protect data in transit.\n- Changes to ioDomain may necessitate updates to firewall rules, proxy configurations, and network security policies.\n- 
Consistency in ioDomain usage across distributed components is essential to avoid routing conflicts and authentication issues.\n- Consider the impact on caching, CDN configurations, and DNS propagation delays when changing ioDomain.\n\n**Dependency chain**\n- Dependent on underlying network infrastructure, DNS setup, and domain registration.\n- Utilized by distributed service components for constructing communication endpoints.\n- May affect authentication and authorization workflows that rely on domain validation or origin verification.\n- Interacts with security components such as SSL/TLS certificate management and firewall configurations.\n- Influences monitoring, logging, and troubleshooting processes related to network communication."},"lastSyncedDate":{"type":"string","format":"date-time","description":"lastSyncedDate represents the precise date and time when the data was last successfully synchronized between the system and NetSuite. This timestamp is essential for monitoring the freshness, consistency, and integrity of synchronized data, enabling systems to determine whether updates or incremental syncs are necessary.\n\n**Field behavior**\n- Captures the exact date and time of the most recent successful synchronization event.\n- Automatically updates only after a sync operation completes successfully without errors.\n- Serves as a reference point to assess if data is current or requires refreshing.\n- Typically stored and transmitted in ISO 8601 format to maintain uniformity across different systems and platforms.\n- Does not reflect the start time or duration of the synchronization process, only its successful completion.\n\n**Implementation guidance**\n- Record the timestamp in Coordinated Universal Time (UTC) to prevent timezone-related inconsistencies.\n- Update this field exclusively after confirming a successful synchronization to avoid misleading data states.\n- Validate the date and time format rigorously to comply with ISO 8601 standards (e.g., 
\"YYYY-MM-DDTHH:mm:ssZ\").\n- Utilize this timestamp to drive incremental synchronization logic, data refresh triggers, or audit trails in downstream workflows.\n- Handle cases where the field may be null or missing, indicating that no synchronization has occurred yet.\n\n**Examples**\n- \"2024-06-15T14:30:00Z\"\n- \"2023-12-01T08:45:22Z\"\n- \"2024-01-10T23:59:59Z\"\n\n**Important notes**\n- This timestamp marks the completion of synchronization, not its initiation.\n- Do not update this field if the synchronization process fails or is incomplete.\n- Maintaining timezone consistency (UTC) is critical to avoid synchronization conflicts or data mismatches.\n- The field may be null or omitted if synchronization has never been performed.\n- Systems relying on this field should implement fallback or error handling for missing or invalid timestamps.\n\n**Dependency chain**\n- Depends on successful completion of the synchronization process between the system and NetSuite.\n- Influences downstream processes such as incremental sync triggers, data validation, and audit logging.\n- May be referenced by monitoring or alerting systems to detect synchronization delays or failures.\n\n**Technical details**\n- Stored as a string in ISO 8601 format with UTC timezone designator (e.g., \"YYYY-MM-DDTHH:mm:ssZ\").\n- Should be generated programmatically at the moment synchronization completes successfully"},"settings":{"type":"object","description":"settings: Configuration settings specific to the distributed module within the NetSuite integration, enabling fine-grained control over distributed processing behavior and performance optimization.\n  **Field behavior**\n  - Encapsulates a collection of key-value pairs representing various configuration parameters that govern the distributed NetSuite integration’s operation.\n  - Includes toggles (boolean flags), numeric thresholds, timeouts, batch sizes, logging options, and other customizable settings relevant to distributed 
processing workflows.\n  - Typically optional for basic usage but essential for advanced customization, performance tuning, and adapting the integration to specific deployment environments.\n  - Changes to these settings can dynamically alter the integration’s behavior, such as retry logic, concurrency limits, and error handling strategies.\n  **Implementation guidance**\n  - Define each setting with a clear, descriptive key name and an appropriate data type (e.g., integer, boolean, string).\n  - Validate input values rigorously to ensure they fall within acceptable ranges or conform to expected formats to prevent runtime errors.\n  - Provide sensible default values for all settings to maintain stable and predictable integration behavior when explicit configuration is absent.\n  - Document each setting comprehensively, including its purpose, valid values, default, and impact on the integration’s operation.\n  - Consider versioning or schema validation to manage changes in settings structure over time.\n  - Ensure that sensitive information is either excluded or securely handled if included within settings.\n  **Examples**\n  - `{ \"retryCount\": 3, \"enableLogging\": true, \"timeoutSeconds\": 120 }` — configures retry attempts, enables detailed logging, and sets operation timeout.\n  - `{ \"batchSize\": 50, \"useNonProduction\": false }` — sets the number of records processed per batch and specifies production environment usage.\n  - `{ \"maxConcurrentJobs\": 10, \"errorThreshold\": 5, \"logLevel\": \"DEBUG\" }` — limits concurrent jobs, sets error tolerance, and defines logging verbosity.\n  **Important notes**\n  - Modifications to settings may require restarting or reinitializing the integration service to apply changes effectively.\n  - Incorrect or suboptimal configuration can cause integration failures, data inconsistencies, or degraded performance.\n  - Avoid storing sensitive credentials or secrets in settings unless encrypted or otherwise secured.\n  - 
Settings should be managed carefully in multi-environment deployments to prevent configuration drift.\n  **Dependency chain**\n  - Dependent on the overall NetSuite integration configuration and"},"useSS2Framework":{"type":"boolean","description":"useSS2Framework indicates whether to utilize the SuiteScript 2.0 framework for the NetSuite distributed configuration, enabling modern scripting capabilities and modular architecture within the NetSuite environment.\n\n**Field behavior**\n- Determines if the SuiteScript 2.0 (SS2) framework is enabled for the NetSuite integration.\n- When set to true, the system uses SS2 APIs, modular script definitions, and updated scripting conventions.\n- When set to false or omitted, the system defaults to using SuiteScript 1.0 or legacy frameworks.\n- Influences script loading mechanisms, module resolution, and API compatibility within NetSuite.\n- Impacts debugging, deployment, and maintenance processes due to differences in framework structure.\n\n**Implementation guidance**\n- Set this property to true to leverage modern SuiteScript 2.0 features such as improved modularity, asynchronous processing, and enhanced performance.\n- Verify that all custom scripts, modules, and third-party integrations are fully compatible with SuiteScript 2.0 before enabling this flag.\n- Conduct thorough testing in a non-production or development environment to identify potential issues arising from framework changes.\n- Use this flag to facilitate gradual migration from legacy SuiteScript 1.0 to SuiteScript 2.0, allowing toggling between frameworks during transition phases.\n- Update deployment pipelines and CI/CD processes to accommodate SuiteScript 2.0 packaging and module formats.\n\n**Examples**\n- `useSS2Framework: true` — Enables SuiteScript 2.0 framework usage, activating modern scripting features.\n- `useSS2Framework: false` — Disables SuiteScript 2.0, falling back to legacy SuiteScript 1.0 framework.\n- Property omitted — Defaults to legacy 
SuiteScript framework (typically 1.0), maintaining backward compatibility.\n\n**Important notes**\n- Enabling the SS2 framework may require refactoring existing scripts to comply with SuiteScript 2.0 syntax, including the use of define/require for module loading.\n- Some legacy APIs, global objects, and modules available in SuiteScript 1.0 may be deprecated or behave differently in SuiteScript 2.0.\n- Performance improvements and new features in SS2 may not be realized if scripts are not properly adapted.\n- Ensure that all scheduled scripts, workflows, and integrations are reviewed for compatibility to prevent runtime errors.\n- Documentation and developer training may be necessary to fully leverage SuiteScript 2.0 capabilities."},"frameworkVersion":{"type":"object","properties":{"type":{"type":"string","description":"type: The type property specifies the category or classification of the framework version within the NetSuite distributed system. It defines the nature or role of the framework version, such as whether it is a major release, minor update, patch, or experimental build. 
This classification is essential for managing version control, deployment strategies, and compatibility assessments across the distributed environment.\n\n**Field behavior**\n- Determines how the framework version is identified, categorized, and processed within the system.\n- Influences compatibility checks, update mechanisms, and deployment workflows.\n- Enables filtering, sorting, and selection of framework versions based on their type.\n- Affects automated decision-making processes such as rollback, promotion, or deprecation of versions.\n\n**Implementation guidance**\n- Use standardized, predefined values or enumerations to represent different types (e.g., \"major\", \"minor\", \"patch\", \"experimental\") to ensure consistency.\n- Maintain uniform naming conventions and case sensitivity across all framework versions.\n- Implement validation logic to restrict the type property to allowed categories, preventing invalid or unsupported entries.\n- Document any custom or extended types clearly to avoid ambiguity.\n- Ensure that changes to the type property trigger appropriate notifications or logging for audit purposes.\n\n**Examples**\n- \"major\" — indicating a significant release that introduces new features or breaking changes.\n- \"minor\" — representing smaller updates that add enhancements or non-breaking improvements.\n- \"patch\" — for releases focused on bug fixes, security patches, or minor corrections.\n- \"experimental\" — denoting versions under testing, development, or not intended for production use.\n- \"deprecated\" — marking versions that are no longer supported or recommended for use.\n\n**Important notes**\n- The type value directly impacts deployment strategies, including automated rollouts and rollback procedures.\n- Accurate and consistent typing is critical for automated update systems and dependency management tools to function correctly.\n- Changes to the type property should be documented thoroughly and communicated to all relevant 
stakeholders to avoid confusion.\n- Misclassification can lead to improper handling of versions, potentially causing system instability or incompatibility.\n- The property should be reviewed regularly to align with evolving release management policies.\n\n**Dependency chain**\n- Depends on the version numbering scheme and release management policies defined in the frameworkVersion.\n- Interacts with deployment, update, and compatibility modules that rely on the type classification to determine appropriate actions.\n- Influences compatibility checks with other components and services within the NetSuite distributed environment.\n- May affect logging, monitoring, and alerting."},"enum":{"type":"array","items":{"type":"string"},"description":"A list of predefined string values that represent the allowed versions of the framework in the NetSuite distributed environment. This enumeration restricts the frameworkVersion property to accept only specific, valid version identifiers, ensuring consistency and preventing invalid version usage across configurations and API interactions.\n\n**Field behavior**\n- Defines the complete set of permissible values for the frameworkVersion property.\n- Enforces validation by restricting inputs to only those versions listed in the enum.\n- Facilitates consistent version management across different components and services.\n- Typically implemented as an array of strings, where each string corresponds to a valid framework version identifier.\n- Serves as a source of truth for supported framework versions in the system.\n\n**Implementation guidance**\n- Populate the enum with all currently supported framework version strings, reflecting official releases.\n- Regularly update the enum to add new versions and deprecate obsolete ones in alignment with release cycles.\n- Use the enum to validate user inputs, API requests, and configuration files to prevent invalid or unsupported versions.\n- Integrate the enum values into UI elements such as 
dropdown menus or selection lists to guide users in choosing valid versions.\n- Ensure synchronization between the enum values and the system’s version recognition logic to avoid discrepancies.\n- Document any changes to the enum clearly to inform developers and users about version support updates.\n\n**Examples**\n- [\"1.0.0\", \"1.1.0\", \"2.0.0\"]\n- [\"v2023.1\", \"v2023.2\", \"v2024.1\"]\n- [\"stable\", \"beta\", \"alpha\"]\n- [\"release-2023Q2\", \"release-2023Q3\", \"release-2024Q1\"]\n\n**Important notes**\n- Enum values must exactly match the version identifiers recognized by the system, including case sensitivity.\n- Modifications to the enum (adding/removing versions) should be performed carefully to maintain backward compatibility.\n- The enum itself does not specify a default version; default version handling should be managed separately in the system.\n- Consistency in formatting and naming conventions of version strings within the enum is critical to avoid confusion.\n- The enum should be treated as authoritative for validation purposes and not overridden by external inputs.\n\n**Dependency chain**\n- Used by the frameworkVersion property to restrict allowed values.\n- Relied upon by validation logic in APIs and configuration parsers.\n- Integrated with UI components for version selection.\n- Maintained in coordination with the system’s version management processes."},"lowercase":{"type":"boolean","description":"lowercase: Specifies whether the framework version string should be converted to lowercase characters to ensure consistent casing across outputs and integrations.\n\n**Field behavior**\n- When set to `true`, the framework version string is converted entirely to lowercase characters.\n- When set to `false` or omitted, the framework version string retains its original casing as provided.\n- Influences how the framework version is displayed in logs, API responses, configuration files, or any output where the version string is used.\n- Does not modify the 
content or structure of the version string, only its letter casing.\n\n**Implementation guidance**\n- Accept only boolean values (`true` or `false`) for this property.\n- Perform the lowercase transformation after the framework version string is generated or retrieved but before it is output, stored, or transmitted.\n- Default behavior should be to preserve the original casing if this property is not explicitly set.\n- Use this property to maintain consistency in environments where case sensitivity affects processing or comparison of version strings.\n- Ensure that any caching or storage mechanisms reflect the transformed casing if this property is enabled.\n\n**Examples**\n- `true` — The framework version `\"V1.2.3\"` becomes `\"v1.2.3\"`.\n- `true` — The framework version `\"v1.2.3\"` remains `\"v1.2.3\"` (already lowercase).\n- `false` — The framework version `\"V1.2.3\"` remains `\"V1.2.3\"`.\n- Property omitted — The framework version string is output exactly as originally provided.\n\n**Important notes**\n- This property only affects letter casing; it does not alter the version string’s format, numeric values, or other characters.\n- Downstream systems or integrations that consume the version string should be verified to handle the casing appropriately.\n- Changing the casing may impact string equality checks or version comparisons if those are case-sensitive.\n- Consider the implications on logging, monitoring, or auditing systems that may rely on exact version string matches.\n\n**Dependency chain**\n- Depends on the presence of a valid framework version string to apply the transformation.\n- Should be evaluated after the framework version is fully constructed or retrieved.\n- May interact with other formatting or normalization properties related to the framework version.\n\n**Technical details**\n- Implemented as a boolean flag within the `frameworkVersion` configuration object.\n- Transformation typically uses standard string lowercase functions provided 
by the programming environment.\n- Should be applied consistently across"}},"description":"frameworkVersion: The specific version identifier of the software framework used within the NetSuite distributed environment. This version string is essential for tracking the exact iteration of the framework deployed, performing compatibility checks between distributed components, and ensuring consistency across the system. It typically follows semantic versioning or a similar structured versioning scheme to convey major, minor, and patch-level changes, including pre-release or build metadata when applicable.\n\n**Field behavior**\n- Represents the precise version of the software framework currently in use within the distributed environment.\n- Serves as a key reference for verifying compatibility between different components and services.\n- Facilitates debugging, support, and audit processes by clearly identifying the framework iteration.\n- Typically adheres to semantic versioning (e.g., MAJOR.MINOR.PATCH) or a comparable versioning format.\n- Remains stable and immutable once a deployment is finalized to ensure traceability.\n\n**Implementation guidance**\n- Maintain a consistent and standardized version string format, such as \"1.2.3\", \"v2.0.0\", or date-based versions like \"2024.06.01\".\n- Update this property promptly whenever the framework undergoes upgrades, patches, or significant changes.\n- Validate the version string against a predefined list of supported or recognized framework versions to prevent errors.\n- Integrate this property into deployment automation, monitoring, and logging tools to verify correct framework usage.\n- Avoid modifying the frameworkVersion post-deployment to maintain historical accuracy and supportability.\n\n**Examples**\n- \"1.0.0\"\n- \"2.3.5\"\n- \"v3.1.0-beta\"\n- \"2024.06.01\"\n- \"1.4.0-rc1\"\n\n**Important notes**\n- Accurate frameworkVersion values are critical to prevent compatibility issues and runtime failures in 
distributed systems.\n- Missing, incorrect, or inconsistent version identifiers can lead to deployment errors, integration problems, or difficult-to-trace bugs.\n- This property should be treated as a source of truth for framework versioning within the NetSuite distributed environment.\n- Coordination with other versioning properties (e.g., applicationVersion, apiVersion) is important for holistic version management.\n\n**Dependency chain**\n- Dependent on the overarching NetSuite distributed system versioning and release management strategy.\n- Closely related to other versioning properties such as applicationVersion and apiVersion for comprehensive compatibility checks.\n- Influences deployment pipelines, runtime environment validations, and compatibility enforcement"}},"description":"Indicates whether the transaction or record is distributed across multiple departments, locations, classes, or subsidiaries within the NetSuite system, allowing for detailed allocation of amounts or quantities for financial tracking and reporting purposes.\n\n**Field behavior**\n- Specifies if the transaction’s amounts or quantities are allocated across multiple organizational segments such as departments, locations, classes, or subsidiaries.\n- When set to `true`, the transaction supports detailed distribution, enabling granular financial analysis and reporting.\n- When set to `false` or omitted, the transaction is treated as assigned to a single segment without any distribution.\n- Influences how the transaction data is processed, posted, and reported within NetSuite’s financial modules.\n\n**Implementation guidance**\n- Use this boolean field to indicate whether a transaction involves distributed allocations.\n- When `distributed` is `true`, ensure that corresponding distribution details (e.g., department, location, class, subsidiary allocations) are provided in related fields to fully define the distribution.\n- Validate that the sum of all distributed amounts or quantities 
equals the total transaction amount to maintain data integrity.\n- Confirm that the specific NetSuite record type supports distribution before setting this field to `true`.\n- Handle this field carefully in integrations to avoid discrepancies in accounting or reporting.\n\n**Examples**\n- `distributed: true` — The transaction amounts are allocated across multiple departments and locations.\n- `distributed: false` — The transaction is assigned to a single department without any distribution.\n- Omitted `distributed` field — Defaults to non-distributed transaction behavior.\n\n**Important notes**\n- Enabling distribution (`distributed: true`) often requires additional detailed data to specify how amounts are allocated.\n- Not all transaction or record types in NetSuite support distribution; verify compatibility beforehand.\n- Incorrect or incomplete distribution data can lead to accounting errors or integration failures.\n- Distribution affects financial reporting and posting; ensure consistency across related fields.\n\n**Dependency chain**\n- Commonly used alongside fields specifying distribution details such as `department`, `location`, `class`, and `subsidiary`.\n- May impact related posting, reporting, and reconciliation processes within NetSuite’s financial modules.\n- Dependent on the transaction type’s capability to support distributed allocations.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default behavior when omitted is typically `false` (non-distributed).\n- Must be synchronized with distribution detail records to ensure accurate financial data.\n- Changes to this field may trigger validation or reprocessing."},"getList":{"type":"object","properties":{"type":{"type":"string","description":"type: Specifies the category or classification of the records to be retrieved from the NetSuite system.\n  This property determines the type of entities that the getList operation will query and return.\n  It defines the scope 
of the data retrieval by indicating which NetSuite record type the API should target.\n  **Field behavior**\n  - Defines the specific record type to fetch, such as customers, transactions, or items.\n  - Influences the structure, fields, and format of the returned data based on the selected record type.\n  - Must be set to a valid NetSuite record type identifier recognized by the API.\n  - Directly impacts the filtering, sorting, and pagination capabilities available for the query.\n  **Implementation guidance**\n  - Use predefined constants or enumerations representing NetSuite record types to avoid errors and ensure consistency.\n  - Validate the type value before making the API call to confirm it corresponds to a supported and accessible record type.\n  - Consider user permissions and roles associated with the record type to ensure the API caller has appropriate access rights.\n  - Review NetSuite documentation for the exact record type identifiers and their expected behaviors.\n  - When possible, test with sample queries to verify the returned data matches expectations for the specified type.\n  **Examples**\n  - \"customer\"\n  - \"salesOrder\"\n  - \"inventoryItem\"\n  - \"employee\"\n  - \"vendor\"\n  - \"purchaseOrder\"\n  **Important notes**\n  - Incorrect or unsupported type values will result in API errors, empty responses, or unexpected data structures.\n  - The type property directly affects query performance, response size, and the complexity of the returned data.\n  - Some record types may require additional filters, parameters, or specific permissions to retrieve meaningful or complete data.\n  - Changes in NetSuite schema or API versions may introduce new record types or deprecate existing ones; keep the type values up to date.\n  **Dependency chain**\n  - The 'type' property is a required input for the NetSuite.getList operation.\n  - The value of 'type' determines the schema, fields, and structure of the records returned in the response.\n  - 
Other properties, filters, or parameters in the getList operation may depend on or vary according to the specified 'type'.\n  - Validation and error handling mechanisms rely on the correctness of the 'type' value.\n  **Technical details**\n  - Accepts string values corresponding to valid NetSuite record type identifiers."},"typeId":{"type":"string","description":"typeId: The unique identifier representing the specific type of record or entity to be retrieved in the NetSuite getList operation.\n  **Field behavior**\n  - Specifies the category or type of records to fetch from NetSuite.\n  - Determines the schema and fields available in the returned records.\n  - Must correspond to a valid NetSuite record type identifier.\n  **Implementation guidance**\n  - Use predefined NetSuite record type IDs as per NetSuite documentation.\n  - Validate the typeId before making the API call to avoid errors.\n  - Ensure the typeId aligns with the permissions and roles of the API user.\n  **Examples**\n  - \"customer\" for customer records.\n  - \"salesOrder\" for sales order records.\n  - \"employee\" for employee records.\n  **Important notes**\n  - Incorrect or unsupported typeId values will result in API errors.\n  - The typeId is case-sensitive and must match NetSuite's expected values.\n  - Changes in NetSuite's API or record types may affect valid typeId values.\n  **Dependency chain**\n  - Depends on the NetSuite record types supported by the account.\n  - Influences the structure and content of the getList response.\n  **Technical details**\n  - Typically a string value representing the internal NetSuite record type.\n  - Used as a parameter in the getList API endpoint to filter records.\n  - Must be URL-encoded if used in query parameters."},"internalId":{"type":"string","description":"Unique identifier assigned internally to an entity within the NetSuite system. 
This identifier is used to reference and retrieve specific records programmatically.\n\n**Field behavior**\n- Serves as the primary key for identifying records in NetSuite.\n- Used in API calls to fetch, update, or delete specific records.\n- Immutable once assigned to a record.\n- Typically a numeric or alphanumeric string.\n\n**Implementation guidance**\n- Must be provided when performing operations that require precise record identification.\n- Should be validated to ensure it corresponds to an existing record before use.\n- Avoid exposing internalId values in public-facing contexts to maintain data security.\n- Use in conjunction with other identifiers if necessary to ensure correct record targeting.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"A1B2C3\"\n\n**Important notes**\n- internalId is unique within the scope of the record type.\n- Different record types may have overlapping internalId values; always confirm the record type context.\n- Not user-editable; assigned by NetSuite upon record creation.\n- Essential for batch operations where multiple records are processed by their internalIds.\n\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used alongside other API parameters such as record type or externalId for comprehensive identification.\n\n**Technical details**\n- Data type: string (often numeric but can include alphanumeric characters).\n- Read-only from the API consumer perspective.\n- Returned in API responses when querying records.\n- Used as a key parameter in getList, get, update, and delete API operations."},"externalId":{"type":"string","description":"A unique identifier assigned to an entity or record by an external system, used to reference or synchronize data between systems.\n\n**Field behavior**\n- Serves as a unique key to identify records originating outside the current system.\n- Used to retrieve, update, or synchronize records with external systems.\n- Typically immutable once assigned to 
maintain consistent references.\n- May be optional or required depending on the integration context.\n\n**Implementation guidance**\n- Ensure the externalId is unique within the scope of the external system.\n- Validate the format and length according to the external system’s specifications.\n- Use this field to map or link records between the local system and external sources.\n- Handle cases where the externalId might be missing or duplicated gracefully.\n- Document the source system and context for the externalId to avoid ambiguity.\n\n**Examples**\n- \"INV-12345\" (Invoice number from an accounting system)\n- \"CRM-987654321\" (Customer ID from a CRM platform)\n- \"EXT-USER-001\" (User identifier from an external user management system)\n\n**Important notes**\n- The externalId is distinct from the internal system’s primary key or record ID.\n- Changes to the externalId can disrupt synchronization and should be avoided.\n- When integrating multiple external systems, ensure externalIds are namespaced or otherwise differentiated.\n- Not all records may have an externalId if they originate solely within the local system.\n\n**Dependency chain**\n- Often used in conjunction with other identifiers like internal IDs or system-specific keys.\n- May depend on authentication or authorization to access external system data.\n- Relies on consistent data synchronization processes to maintain accuracy.\n\n**Technical details**\n- Typically represented as a string data type.\n- May include alphanumeric characters, dashes, or underscores.\n- Should be indexed in databases for efficient lookup.\n- May require encoding or escaping if used in URLs or queries."},"_id":{"type":"object","description":"_id: The unique identifier for a record within the NetSuite system. 
This identifier is used to retrieve, update, or reference specific records in API operations.\n**Field behavior**\n- Serves as the primary key for records in NetSuite.\n- Must be unique within the context of the record type.\n- Used to fetch or manipulate specific records via API calls.\n- Immutable once assigned to a record.\n**Implementation guidance**\n- Ensure the _id is correctly captured from NetSuite responses when retrieving records.\n- Validate the _id format as per NetSuite’s specifications before using it in requests.\n- Use the _id to perform precise operations such as updates or deletions.\n- Handle cases where the _id may not be present or is invalid gracefully.\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"987654321\"\n**Important notes**\n- The _id is critical for identifying records uniquely; incorrect usage can lead to data inconsistencies.\n- Do not generate or alter _id values manually; always use those provided by NetSuite.\n- The _id is typically a numeric string but confirm with the specific NetSuite record type.\n**Dependency chain**\n- Dependent on the record type being accessed or manipulated.\n- Used in conjunction with other record fields for comprehensive data operations.\n**Technical details**\n- Typically represented as a string or numeric value.\n- Returned in API responses when listing or retrieving records.\n- Required in API requests for operations targeting specific records."}},"description":"getList: Retrieves a list of records from the NetSuite system based on specified criteria and parameters. This operation enables fetching multiple records in a single request, optimizing data retrieval and processing efficiency. 
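For illustration, a minimal getList object built from the properties defined in this schema (the values, and which optional identifiers you supply, are placeholders for your own data):\n```json\n{\n  "type": "customer",\n  "internalId": "12345"\n}\n```\n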
It supports filtering, sorting, and pagination to manage large datasets effectively, and returns both the matching records and relevant metadata about the query results.\n\n**Field behavior**\n- Accepts parameters defining the record type to retrieve, along with optional filters, search criteria, and sorting options.\n- Returns a collection (list) of records that match the specified criteria.\n- Supports pagination by allowing clients to specify limits and offsets or use tokens to navigate through large result sets.\n- Includes metadata such as total record count, current page number, and page size to facilitate client-side data handling.\n- May return partial data if the dataset exceeds the maximum allowed records per request.\n- Handles cases where no records match the criteria by returning an empty list with appropriate metadata.\n\n**Implementation guidance**\n- Require explicit specification of the record type to ensure accurate data retrieval.\n- Validate and sanitize all input parameters (filters, sorting, pagination) to prevent malformed queries and optimize performance.\n- Implement robust pagination logic to allow clients to retrieve subsequent pages seamlessly.\n- Provide clear error messages and status codes for scenarios such as invalid parameters, unauthorized access, or record type not found.\n- Ensure compliance with NetSuite API rate limits and handle throttling gracefully.\n- Support common filter operators (e.g., equals, contains, greater than) consistent with NetSuite’s search capabilities.\n- Return consistent and well-structured response formats to facilitate client parsing and integration.\n\n**Examples**\n- Retrieving a list of customer records filtered by status (e.g., Active) and creation date range.\n- Fetching a batch of sales orders placed within a specific date range, sorted by order date descending.\n- Obtaining a paginated list of inventory items filtered by category and availability status.\n- Requesting the first 50 employee 
records with a specific job title, then fetching subsequent pages as needed.\n- Searching for vendor records containing a specific keyword in their name or description.\n\n**Important notes**\n- The maximum number of records returned per request is subject to NetSuite API limits, which may require multiple paginated requests for large datasets.\n- Proper authentication and authorization are mandatory to access the requested records; insufficient permissions will result in access errors.\n- The structure and fields of the returned records vary depending on the specified record type and the fields requested or defaulted"},"searchPreferences":{"type":"object","properties":{"bodyFieldsOnly":{"type":"boolean","description":"bodyFieldsOnly indicates whether the search results should include only the body fields of the records, excluding any joined or related record fields. This setting controls the scope of data returned by the search operation, allowing for more focused and efficient retrieval when only the main record's fields are necessary.\n\n**Field behavior**\n- When set to true, search results will include only the fields that belong directly to the main record (body fields), excluding any fields from joined or related records.\n- When set to false or omitted, search results may include fields from both the main record and any joined or related records specified in the search.\n- Directly affects the volume and detail of data returned, potentially reducing payload size and improving performance.\n- Influences how the search engine processes and compiles the result set, limiting it to primary record data when enabled.\n\n**Implementation guidance**\n- Use this property to optimize search performance and reduce data transfer when only the main record's fields are required.\n- Set to true to minimize payload size, which is beneficial for large datasets or bandwidth-sensitive environments.\n- Verify that your search criteria and downstream processing do not require 
any joined or related record fields before enabling this option.\n- If joined fields are necessary for your application logic, keep this property false or unset to ensure complete data retrieval.\n- Consider this setting in conjunction with other search preferences like pageSize and returnSearchColumns for optimal results.\n\n**Examples**\n- `bodyFieldsOnly: true` — returns only the main record’s body fields in search results, excluding any joined record fields.\n- `bodyFieldsOnly: false` — returns both body fields and fields from joined or related records as specified in the search.\n- Omitted `bodyFieldsOnly` property — defaults to false behavior, including joined fields if requested.\n\n**Important notes**\n- Enabling bodyFieldsOnly may omit critical related data if your search logic depends on joined fields, potentially impacting application functionality.\n- This setting is particularly useful for improving performance and reducing data size in scenarios where joined data is unnecessary.\n- Not all record types or search operations may support this preference; verify compatibility with your specific use case.\n- Changes to this setting can affect the structure and completeness of search results, so test thoroughly when modifying.\n\n**Dependency chain**\n- This property is part of the `searchPreferences` object within the NetSuite API request.\n- It influences the fields returned by the search operation, affecting both data scope and payload size.\n- May"},"pageSize":{"type":"number","description":"pageSize specifies the number of search results to be returned per page in a paginated search response. This property controls the size of each page of results when performing searches, enabling efficient handling and retrieval of large datasets by dividing them into manageable chunks. 
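As a sketch, pageSize is set alongside the other preferences defined in this object (all values here are illustrative, not recommended defaults):\n```json\n{\n  "searchPreferences": {\n    "bodyFieldsOnly": true,\n    "pageSize": 100,\n    "returnSearchColumns": false\n  }\n}\n```\n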
By adjusting pageSize, clients can balance between the volume of data received per request and the performance implications of processing large result sets.\n\n**Field behavior**\n- Determines the maximum number of records returned in a single page of search results.\n- Directly influences the pagination mechanism by setting how many items appear on each page.\n- Helps optimize network and client performance by limiting the amount of data transferred and processed per response.\n- Affects the total number of pages available, calculated based on the total number of search results divided by pageSize.\n- When pageSize is changed between requests, it may affect the consistency of pagination navigation.\n\n**Implementation guidance**\n- Assign pageSize a positive integer value that balances response payload size and system performance.\n- Ensure the value respects any minimum and maximum limits imposed by the API or backend system.\n- Maintain consistent pageSize values across paginated requests to provide predictable and stable navigation through result pages.\n- Consider client device capabilities, network bandwidth, and expected user interaction patterns when selecting pageSize.\n- Implement validation to prevent invalid or out-of-range values that could cause errors or degraded performance.\n- When dealing with very large datasets, consider smaller pageSize values to reduce memory consumption and improve responsiveness.\n\n**Examples**\n- pageSize: 25 — returns 25 search results per page, suitable for standard list views.\n- pageSize: 100 — returns 100 search results per page, useful for bulk data processing or export scenarios.\n- pageSize: 10 — returns 10 search results per page, ideal for quick previews or limited bandwidth environments.\n- pageSize: 50 — a moderate setting balancing data volume and performance for typical use cases.\n\n**Important notes**\n- Excessively large pageSize values can increase response times, memory usage, and may lead to timeouts or 
throttling.\n- Very small pageSize values can cause a high number of API calls, increasing overall latency and server load.\n- The API may enforce maximum allowable pageSize limits; requests exceeding these limits may result in errors or automatic truncation.\n- Changing pageSize mid-pagination can disrupt user experience by altering the number of pages and item offsets.\n- Some APIs may have default pageSize values if none is specified; explicitly setting pageSize makes pagination behavior predictable."},"returnSearchColumns":{"type":"boolean","description":"returnSearchColumns: Specifies whether the search operation should return the columns (fields) defined in the search results, providing detailed data for each record matching the search criteria.\n\n**Field behavior**\n- Determines if the search response includes the columns specified in the search definition, such as field values and metadata.\n- When set to true, the search results will contain detailed column data for each record, enabling comprehensive data retrieval.\n- When set to false, the search results will omit column data, potentially returning only record identifiers or minimal information.\n- Directly influences the amount of data returned, impacting response payload size and processing time.\n- Affects how client applications can utilize the search results, depending on the presence or absence of column data.\n\n**Implementation guidance**\n- Set to true when detailed search result data is required for processing, reporting, or display purposes.\n- Set to false to optimize performance and reduce bandwidth usage when only record IDs or minimal data are needed.\n- Use in conjunction with other search preference settings (e.g., `pageSize`, `returnSearchRows`) to fine-tune search responses.\n- Ensure client applications are designed to handle both scenarios—presence or absence of column data—to avoid errors or incomplete processing.\n- Consider the trade-off between data completeness and performance when configuring this 
property.\n\n**Examples**\n- `returnSearchColumns: true` — The search results will include all defined columns for each record, such as names, dates, and custom fields.\n- `returnSearchColumns: false` — The search results will exclude column data, returning only basic record information like internal IDs.\n\n**Important notes**\n- Enabling returnSearchColumns may significantly increase response size and processing time, especially for searches returning many records or columns.\n- Some search operations or API endpoints may require columns to be returned to function correctly or to provide meaningful results.\n- Disabling this option can improve performance but limits the detail available in search results, which may affect downstream processing or user interfaces.\n- Changes to this setting can impact caching, pagination, and sorting behaviors depending on the search implementation.\n\n**Dependency chain**\n- Related to other `searchPreferences` properties such as `pageSize` (controls number of records per page) and `returnSearchRows` (controls whether search rows are returned).\n- Works in tandem with search definition settings that specify which columns are included in the search.\n- May affect or be affected by API-level configurations or limitations on data retrieval and response formatting."}},"description":"searchPreferences: Preferences that control the behavior and parameters of search operations within the NetSuite environment, enabling customization of how search queries are executed and how results are returned to optimize relevance, performance, and user experience.\n\n**Field behavior**\n- Defines the execution parameters for search queries, including pagination, sorting, filtering, and result formatting.\n- Controls the scope, depth, and granularity of data retrieved during search operations.\n- Influences the performance, accuracy, and relevance of search results based on configured preferences.\n- Can be adjusted dynamically to tailor search 
behavior to specific user roles, contexts, or application requirements.\n- May include settings such as page size limits, sorting criteria, case sensitivity, and filter application.\n\n**Implementation guidance**\n- Utilize this property to fine-tune search operations to meet specific user or application needs, improving efficiency and relevance.\n- Validate all preference values against supported NetSuite search parameters to prevent errors or unexpected behavior.\n- Establish sensible default preferences to ensure consistent and predictable search results when explicit preferences are not provided.\n- Allow dynamic updates to preferences to adapt to changing contexts, such as different user roles or data volumes.\n- Ensure that preference configurations comply with user permissions and role-based access controls to maintain security and data integrity.\n\n**Examples**\n- Setting a page size of 50 to limit the number of records returned per search query for better performance.\n- Enabling case-insensitive search filters to broaden result matching.\n- Specifying sorting order by transaction date in descending order to show the most recent records first.\n- Applying filters to restrict search results to a particular customer segment or date range.\n- Configuring search to exclude inactive records to streamline results.\n\n**Important notes**\n- Misconfiguration of searchPreferences can lead to incomplete, irrelevant, or inefficient search results, negatively impacting user experience.\n- Certain preferences may be restricted or overridden based on user roles, permissions, or API version constraints.\n- Changes to searchPreferences can affect system performance; excessive page sizes or complex filters may increase load times.\n- Always verify compatibility of preference settings with the specific NetSuite API version and environment in use.\n- Consider the impact of preferences on downstream processes that consume search results.\n\n**Dependency chain**\n- Depends on 
the overall search operation configuration and the specific search type being performed.\n- Interacts with user authentication and authorization settings to enforce access controls on search results.\n- Influences and is influenced by data retrieval mechanisms and indexing strategies within NetSuite.\n- Works in conjunction with"},"file":{"type":"object","description":"Configuration for retrieving files from NetSuite file cabinet and PARSING them into records. Use this for structured file exports (CSV, XML, JSON) where the file content should be parsed into data records.\n\n**Critical:** When to use file vs blob\n- Use `netsuite.file` WITH export `type: null/undefined` for file exports WITH parsing (CSV, XML, JSON)\n- Use `netsuite.blob` WITH export `type: \"blob\"` for raw binary transfers WITHOUT parsing\n\nWhen you want file content to be parsed into individual records, use this `file` configuration and leave the export's `type` field as null or undefined (standard export). Do NOT set `type: \"blob\"` when using this configuration.","properties":{"folderInternalId":{"type":"string","description":"The internal ID of the NetSuite File Cabinet folder from which files will be exported.\n\nSpecify the internal ID for the NetSuite File Cabinet folder from which you want to export your files. If the folder internal ID is required to be dynamic based on the data you are integrating, you can specify the JSON path to the field in your data containing the folder internal ID values instead. 
For example, {{{myFileField.fileName}}}.\n\n**Field behavior**\n- Identifies the specific folder in NetSuite's file cabinet to export files from\n- Must be a valid internal ID that exists in the NetSuite environment\n- Supports dynamic values using handlebars notation for data-driven folder selection\n- The internal ID is distinct from folder names or paths; it's a stable numeric identifier\n\n**Implementation guidance**\n- Obtain the folderInternalId via NetSuite's UI (File Cabinet > folder properties) or API\n- For static exports, use the numeric internal ID directly (e.g., \"12345\")\n- For dynamic exports, use handlebars syntax to reference a field in your data\n- Verify folder permissions - the integration user must have access to the folder\n\n**Dependency chain**\n- Depends on the existence of the folder within the NetSuite file cabinet.\n- Requires appropriate user permissions to access or modify the folder.\n- Often used in conjunction with file identifiers and other file metadata fields.\n- May be linked to folder creation or folder search operations to retrieve valid IDs.\n\n**Examples**\n- \"12345\" - Static folder internal ID\n- \"67890\" - Another valid folder internal ID\n- \"{{{record.folderId}}}\" - Dynamic folder ID from integration data\n- \"{{{myFileField.fileName}}}\" - Dynamic value from a field in your data\n\n**Important notes**\n- Using an incorrect or non-existent folderInternalId will result in errors or unintended file placement\n- Folder permissions and user roles can affect operations even with a valid folderInternalId\n- Folder hierarchy changes do not affect the folderInternalId, ensuring persistent reference integrity\n- Internal IDs may 
differ between non-production and production environments"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the internal identifier of the backup folder within the NetSuite file cabinet where backup files are stored. This ID uniquely identifies the folder location used for saving backup files programmatically, ensuring that backup operations target the correct directory within the NetSuite environment.\n\n**Field behavior**\n- Represents a unique internal ID assigned by NetSuite to a specific folder in the file cabinet.\n- Directs backup operations to the designated folder location for storing backup files.\n- Must correspond to an existing and accessible folder within the NetSuite file cabinet.\n- Typically handled as an integer value in API requests and responses.\n- Immutable for a given folder; changing the folder requires updating this ID accordingly.\n\n**Implementation guidance**\n- Verify that the folder with this internal ID exists before initiating backup operations.\n- Confirm that the folder has the necessary permissions to allow writing and managing backup files.\n- Use NetSuite SuiteScript APIs or REST API calls to retrieve and validate folder internal IDs dynamically.\n- Avoid hardcoding the internal ID; instead, use configuration files, environment variables, or administrative settings to maintain flexibility across environments.\n- Implement error handling to manage cases where the folder ID is invalid, missing, or inaccessible.\n- Consider environment-specific IDs for non-production versus production to prevent misdirected backups.\n\n**Examples**\n- 12345\n- 67890\n- 112233\n\n**Important notes**\n- The internal ID is unique per NetSuite account and environment; IDs do not transfer between non-production and production.\n- Deleting or renaming the folder associated with this ID will disrupt backup processes until updated.\n- Proper access rights and permissions are mandatory to write backup files to the 
specified folder.\n- Changes to folder structure or permissions should be coordinated with backup scheduling to avoid failures.\n- This property is critical for ensuring backup data integrity and recoverability within NetSuite.\n\n**Dependency chain**\n- Depends on the existence and accessibility of the folder in the NetSuite file cabinet.\n- Interacts with backup scheduling, file naming conventions, and storage management properties.\n- May be linked with authentication and authorization mechanisms controlling file cabinet access.\n- Relies on NetSuite API capabilities to manage and reference file cabinet folders.\n\n**Technical details**\n- Data type: Integer (typically a 32-bit integer).\n- Represents the internal NetSuite folder ID, not the folder name or path.\n- Used in API payloads to specify backup destination folder.\n- Must be retrieved or confirmed via NetSuite SuiteScript"}}}},"required":[]},"RDBMS-2":{"type":"object","description":"Configuration object for Relational Database Management System (RDBMS) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an RDBMS database connection\nand must not be included for other connection types. 
It defines how data is extracted\nfrom relational databases using SQL queries.\n\n**Rdbms export capabilities**\n- Execute custom SQL SELECT statements\n- Support for joins, aggregations, and functions\n- Flexible data retrieval from any accessible tables or views\n- Compatible with all major database systems\n\n**Critical:** WHAT BELONGS IN THIS OBJECT\n- `query` - **ALWAYS REQUIRED** - The SQL SELECT statement\n- `once` - **REQUIRED** when the export's Object Type is `\"once\"` (set _include_once: true)\n- **DO NOT** put `delta` inside this object - delta is handled via the query\n\n**Delta exports (type: \"delta\")**\nFor delta/incremental exports, do NOT populate a `delta` object inside `rdbms`.\nInstead, use `{{lastExportDateTime}}` or `{{currentExportDateTime}}` directly in the query:\n```json\n{\n  \"type\": \"delta\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE updatedAt > {{lastExportDateTime}}\"\n  }\n}\n```\n\n**Once exports (type: \"once\")**\nFor once exports (mark records as processed), populate `rdbms.once.query`:\n```json\n{\n  \"type\": \"once\",\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE exported = false\",\n    \"once\": {\n      \"query\": \"UPDATE customers SET exported = true WHERE id = {{record.id}}\"\n    }\n  }\n}\n```\n\n**Standard exports (type: null or not specified)**\nJust provide the query:\n```json\n{\n  \"rdbms\": {\n    \"query\": \"SELECT * FROM customers WHERE status = 'ACTIVE'\"\n  }\n}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - SQL SELECT query for retrieving data from the database.\n\nThis field contains the SQL SELECT statement that will be executed to fetch data\nfrom the database. 
The query can range from simple table selections to complex\njoins and aggregations.\n\nExamples:\n- Basic: `SELECT id, name, email FROM customers WHERE status = 'ACTIVE'`\n- Join: `SELECT o.id, c.name, o.amount FROM orders o JOIN customers c ON o.customer_id = c.id`\n- Aggregate: `SELECT category, COUNT(*) as count FROM orders GROUP BY category`\n- Parameterized: `SELECT * FROM orders WHERE customer_id = {{record.customer_id}}`\n"},"once":{"type":"object","description":"**CRITICAL: REQUIRED when the export's Object Type is \"once\".**\n\nIf Object Type is \"once\", you MUST set _include_once to true (or include this object).\n\nThis object has ONLY ONE property: \"query\" (a SQL UPDATE string).\nDO NOT create any other properties like \"update\", \"table\", \"set\", \"where\", etc.\n\nCORRECT format:\n```json\n{\"query\": \"UPDATE customers SET exported=true WHERE id={{record.id}}\"}\n```\n\nWRONG format (DO NOT DO THIS):\n```json\n{\"update\": {\"table\": \"customers\", \"set\": {...}}}\n```\n","properties":{"query":{"type":"string","description":"**REQUIRED** - A SQL UPDATE statement string to mark records as processed.\n\nThis is a plain SQL UPDATE query string, NOT a structured object.\n\nThe query runs AFTER each record is successfully exported, setting a flag\nto indicate the record has been processed.\n\nFormat: \"UPDATE <table> SET <column>=<value> WHERE <id_column>={{record.<id_field>}}\"\n\nExample: \"UPDATE customers SET exported=true WHERE id={{record.id}}\"\n\nThe {{record.id}} placeholder is replaced with the actual record ID from each exported row.\n"}}}}},"S3-3":{"type":"object","description":"Configuration object for Amazon S3 (Simple Storage Service) data integration exports.\n\nThis object is REQUIRED when the _connectionId field references an AWS S3 connection\nand must not be included for other connection types. 
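A minimal sketch of such a configuration (assuming the object sits under an `s3` key, parallel to the `rdbms` examples above; region, bucket, and key filters are placeholder values):\n```json\n{\n  "s3": {\n    "region": "us-east-1",\n    "bucket": "acme-exports",\n    "keyStartsWith": "exports/",\n    "keyEndsWith": ".csv"\n  }\n}\n```\n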
It defines how files are retrieved\nfrom S3 buckets for processing in integrations.\n\nThe S3 export object has the following requirements:\n\n- Required fields: region, bucket\n- Optional fields: keyStartsWith, keyEndsWith, backupBucket, keyPrefix\n\n**Purpose**\n\nThis configuration specifies:\n- Which S3 bucket to retrieve files from\n- How to filter files by key patterns\n- Where to move files after retrieval (optional)\n","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located.\n\n- REQUIRED for all S3 exports\n- Must be a valid AWS region identifier (e.g., us-east-1, eu-west-1)\n- Case-insensitive (will be normalized to lowercase)\n"},"bucket":{"type":"string","description":"The S3 bucket name to retrieve files from.\n\n- REQUIRED for all S3 exports\n- Must be a valid existing S3 bucket name\n- Globally unique across all AWS accounts\n- AWS credentials must have s3:ListBucket and s3:GetObject permissions\n"},"keyStartsWith":{"type":"string","description":"Optional prefix filter for S3 object keys.\n\n- Filters files based on the beginning of their keys\n- Functions as a directory path in S3's flat storage structure\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\"exports/\"` - retrieves files in the exports \"directory\"\n  - `\"customer/orders/2023/\"` - retrieves files in this nested path\n  - `\"invoice_\"` - retrieves files starting with \"invoice_\"\n\nWhen used with keyEndsWith, files must match both criteria.\n"},"keyEndsWith":{"type":"string","description":"Optional suffix filter for S3 object keys.\n\n- Commonly used to filter by file extension\n- Case-sensitive (S3 keys are case-sensitive)\n- Examples:\n  - `\".csv\"` - retrieves only CSV files\n  - `\"_FINAL.xml\"` - retrieves only XML files with _FINAL suffix\n  - `\"_READY\"` - retrieves files with status indicator\n\nWhen used with keyStartsWith, files must match both 
criteria.\n"},"backupBucket":{"type":"string","description":"Optional destination bucket where files are moved before deletion.\n\n- If omitted, files are deleted from the source bucket after successful export\n- Must be a valid existing S3 bucket in the same region\n- AWS credentials must have s3:PutObject permissions on this bucket\n- Provides an independent backup of exported files\n\nIMPORTANT: Celigo automatically deletes files from the source bucket after\nsuccessful export. The backup bucket is for users who want to maintain their\nown independent backup of exported files. Celigo also maintains its own backup\nof processed files for a set period of time.\n"},"keyPrefix":{"type":"string","description":"Optional prefix to prepend to keys when moving to backup bucket.\n\n- Used only when backupBucket is specified\n- Prepended to the original filename when moved to backup\n- Can contain static text or handlebars templates\n- Examples:\n  - `\"processed/\"` - places files under a processed folder\n  - `\"archive/{{date 'YYYY-MM-DD'}}/\"` - organizes by date\n\nIMPORTANT: The original file's directory structure is not preserved. Only the\nfilename is appended to this prefix in the backup location.\n"}}},"Wrapper-3":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Function name to invoke in the wrapper export"},"configuration":{"type":"object","description":"Wrapper-specific configuration payload","additionalProperties":true}}},"Parsers":{"type":"array","description":"Configuration for parsing XML payloads (i.e. files, HTTP responses, etc.). 
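A minimal single-parser sketch using the fields defined below (the listNodes path is a placeholder for your own XML structure):\n```json\n[\n  {\n    "version": "1",\n    "type": "xml",\n    "rules": {\n      "V0_json": false,\n      "listNodes": ["orders/order"]\n    }\n  }\n]\n```\n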
Use this field when you need to process XML data\nand transform it into JSON records.\n\n**Implementation notes**\n\n- This is where you configure how to parse XML data in your resource\n- Although defined as an array, you typically only need a single parser configuration\n- Currently only XML parsing is supported\n- Only configure this field when working with XML data that needs structured parsing\n","items":{"type":"object","properties":{"version":{"type":"string","description":"Version identifier for the parser configuration format. Currently only version \"1\" is supported.\n\nAlways set this field to \"1\" as it's the only supported version at this time.\n","enum":["1"]},"type":{"type":"string","description":"Defines the type of parser to use. Currently only \"xml\" is supported.\n\nWhile the system is designed to potentially support multiple parser types in the future,\nat this time only XML parsing is implemented, so this field must be set to \"xml\".\n","enum":["xml"]},"name":{"type":"string","description":"Optional identifier for the parser configuration. This field is primarily for documentation\npurposes and is not functionally used by the system.\n\nThis field can be omitted in most cases as it's not required for parser functionality.\n"},"rules":{"type":"object","description":"Configuration rules that determine how XML data is parsed and converted to JSON.\nThese settings control the structure and format of the resulting JSON records.\n\n**Parsing options**\n\nThere are two main parsing strategies available:\n- **Automatic parsing**: Simple but produces more complex output\n- **Custom parsing**: More control over the resulting JSON structure\n","properties":{"V0_json":{"type":"boolean","description":"Controls the XML parsing strategy.\n\n- When set to **true** (Automatic): XML data is automatically converted to JSON without\n  additional configuration. 
This is simpler to set up but typically produces more complex\n  and deeply nested JSON that may be harder to work with.\n\n- When set to **false** (Custom): Gives you more control over how the XML is converted to JSON.\n  This requires additional configuration (like listNodes) but produces cleaner, more\n  predictable JSON output.\n\nMost implementations use the Custom approach (false) for better control over the output format.\n"},"listNodes":{"type":"array","description":"Specifies which XML nodes should be treated as arrays (lists) in the output JSON.\n\nIt's not always possible to automatically determine if an XML node should be a single value\nor an array. Use this field to explicitly identify nodes that should be treated as arrays,\neven if they appear only once in the XML.\n\nEach entry should be a simplified XPath expression pointing to the node that should be\ntreated as an array.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"includeNodes":{"type":"array","description":"Limits which XML nodes are included in the output JSON.\n\nFor large XML documents, you can use this field to extract only the nodes you need,\nreducing the size and complexity of the resulting JSON. 
Only nodes specified here\n(and their children) will be included in the output.\n\nEach entry should be a simplified XPath expression pointing to nodes to include.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"excludeNodes":{"type":"array","description":"Specifies which XML nodes should be excluded from the output JSON.\n\nSometimes it's easier to specify which nodes to exclude rather than which to include.\nUse this field to identify nodes that should be omitted from the output JSON.\n\nEach entry should be a simplified XPath expression pointing to nodes to exclude.\n\nOnly relevant when V0_json is set to false (Custom parsing).\n","items":{"type":"string"}},"stripNewLineChars":{"type":"boolean","description":"Controls whether newline characters are removed from text values.\n\nWhen set to true, all newline characters (\\n, \\r, etc.) will be removed from\ntext content in the XML before conversion to JSON.\n","default":false},"trimSpaces":{"type":"boolean","description":"Controls whether leading and trailing whitespace is trimmed from text values.\n\nWhen set to true, all values will have leading and trailing whitespace removed\nbefore conversion to JSON.\n","default":false},"attributePrefix":{"type":"string","description":"Specifies a character sequence to prepend to XML attribute names when converted to JSON properties.\n\nIn XML, both elements and attributes can exist at the same level, but in JSON this distinction is lost.\nTo maintain the distinction between element data and attribute data in the resulting JSON, this prefix\nis added to attribute names during conversion.\n\nFor example, with attributePrefix set to \"Att-\" and an XML element like:\n```xml\n<product id=\"123\">Laptop</product>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"product\": \"Laptop\",\n  \"Att-id\": \"123\"\n}\n```\n\nThis helps maintain the distinction between element content and attribute values in the\nconverted JSON, making 
it easier to reference specific data in downstream processing steps.\n"},"textNodeName":{"type":"string","description":"Specifies the property name to use for element text content when an element has both\ntext content and child elements or attributes.\n\nWhen an XML element contains both text content and other nested elements or attributes,\nthis field determines what property name will hold the text content in the resulting JSON.\n\nFor example, with textNodeName set to \"value\" and an XML element like:\n```xml\n<item id=\"123\">\n  Laptop\n  <category>Electronics</category>\n</item>\n```\n\nThe resulting JSON would be:\n```json\n{\n  \"item\": {\n    \"value\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"id\": \"123\"\n  }\n}\n```\n\nThis allows for unambiguous parsing of complex XML structures that mix text content with\nchild elements. Choose a name that's unlikely to conflict with actual element names in your XML.\n"}}}}}},"MockOutput":{"type":"object","description":"Sample data that simulates the output from an export for testing and configuration purposes.\n\nMock output allows you to configure and test flows without executing the actual export or\nwaiting for real-time data to arrive. This is particularly useful for:\n- Initial flow configuration and testing\n- Mapping development without requiring live data\n- Generating metadata for downstream flow steps\n- Creating realistic test scenarios\n- Documenting expected data structures\n\n**Structure**\n\nThe mock output must follow the integrator.io canonical format, which consists of a\n`page_of_records` array containing record objects. 
Each record object has a `record`\nproperty that contains the actual data fields.\n\n```json\n{\n  \"page_of_records\": [\n    {\n      \"record\": {\n        \"field1\": \"value1\",\n        \"field2\": \"value2\",\n        ...\n      }\n    },\n    ...\n  ]\n}\n```\n\n**Usage**\n\nWhen executing a test run or configuring a flow, integrator.io will use this mock output\ninstead of executing the export to retrieve live data. This allows you to:\n- Test mappings with representative data\n- Configure downstream flow steps without waiting for real data\n- Simulate various data scenarios\n\n**Limitations**\n\n- Maximum of 10 records\n- Maximum size of 1 MB\n- Must follow the canonical format shown above\n\nMock output can be populated automatically from preview data or entered manually.\n","properties":{"page_of_records":{"type":"array","description":"Array of record objects in the integrator.io canonical format.\n\nEach item in this array represents one record that would be processed\nby the flow during execution.\n","items":{"type":"object","properties":{"record":{"type":"object","description":"Container for the actual record data fields.\n\nThe structure of this object will vary depending on the specific\nexport configuration and the source system's data structure.\n","additionalProperties":true}}}}}},"Response":{"type":"object","description":"Response schema for tool operations.\n\nContains the complete tool configuration including metadata, input/output\nsettings, routing logic, and AI-generated descriptions.\n","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the tool"},"name":{"type":"string","description":"Human-readable name for the tool"},"description":{"type":"string","description":"Detailed description of the tool's purpose"},"_integrationId":{"type":"string","format":"objectId","description":"Reference to the parent 
integration"},"input":{"$ref":"#/components/schemas/Input"},"output":{"$ref":"#/components/schemas/Output"},"routers":{"type":"array","description":"Routing configuration for conditional processing.\n\nOnly present when the tool has routers configured.\n","items":{"$ref":"#/components/schemas/Router"}},"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was created"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was last modified"},"deletedAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the tool was soft-deleted.\n\nOnly present for deleted tools. The tool will be permanently\nremoved 30 days after this timestamp.\n"}}},"Input":{"type":"object","description":"Configuration for the tool's input processing.\n\nDefines the expected input structure, optional transformations to apply\nbefore routing, and mock data for testing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the input configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the expected input data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the expected input data structure.\n\nUsed for validation, documentation, and AI-assisted tooling.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"transform":{"$ref":"#/components/schemas/Transform"},"mockInput":{"type":"object","description":"Mock data for testing the tool's input processing.\n\nProvides sample input to test transformation logic and routing\nwithout requiring live data. 
Maximum size: 1MB.\n","additionalProperties":true}}},"Output":{"type":"object","description":"Configuration for the tool's output processing.\n\nDefines how the tool's results are mapped, transformed, and enriched\nbefore being returned. Supports field mappings, lookups for data\nenrichment, and custom script hooks for pre/post-mapping processing.\n","properties":{"name":{"type":"string","maxLength":200,"description":"Display name for the output configuration.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of the output data and its purpose.\n"},"schema":{"type":"object","description":"JSON Schema describing the output data structure.\n\nUsed for documentation and validation of the tool's output.\nMust be a valid JSON Schema document.\n","additionalProperties":true},"mappings":{"type":"array","description":"Field mappings to transform data into the output format.\n\nMaps data from processing results to the output structure.\nUses Celigo's standard mapping format with extract/generate field paths.\n","items":{"$ref":"#/components/schemas/Mappings"}},"lookups":{"type":"array","description":"Lookup tables for data enrichment during output processing.\n\nStatic key-value mappings used to translate values (e.g., status codes,\ncategory names) during output generation.\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Name of the lookup, used to reference it from mappings.\n"},"map":{"type":"object","description":"Key-value mapping object. 
Keys are the input values and\nvalues are the corresponding output values.\n","additionalProperties":true},"default":{"type":"string","description":"Default value returned when the input key is not found in the map.\n"},"allowFailures":{"type":"boolean","description":"Whether to continue processing if the lookup fails to find a match\nand no default is provided.\n"}},"required":["name"]}},"hooks":{"type":"object","description":"Custom script hooks for pre- and post-mapping processing.\n\nAllows running custom JavaScript functions before and after\noutput mappings are applied.\n","properties":{"preMap":{"type":"object","description":"Script to run before applying output mappings.\n\nCan modify the data before it is mapped to the output structure.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}},"postMap":{"type":"object","description":"Script to run after applying output mappings.\n\nCan modify the final output data after mappings are applied.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute within the script"}}}}},"mockInput":{"type":"object","description":"Mock data for testing the tool's output processing.\n\nProvides sample data that would arrive from the routing/processing\nstage, used to test mapping and lookup logic. Maximum size: 1MB.\n","additionalProperties":true}}},"Router":{"type":"object","description":"Configuration for conditional routing within a tool.\n\nRouters evaluate input data and direct it to different processing branches\nbased on criteria. 
This enables complex business logic and conditional\nprocessing within the tool.\n\nUnlike flows, tools only support \"first_matching_branch\" routing strategy.\nBranches can chain to other routers or use the special \"outputRouter\"\nterminal sink to exit the tool and return results.\n","properties":{"id":{"type":"string","description":"Unique identifier for this router within the tool.\n\nUsed to reference this router from other routers' branch `nextRouterId`.\n"},"name":{"type":"string","maxLength":300,"description":"Human-readable name for the router.\n"},"routeRecordsTo":{"type":"string","enum":["first_matching_branch"],"description":"Routing strategy. Tools only support \"first_matching_branch\",\nwhich routes to the first branch whose criteria match the input.\n"},"routeRecordsUsing":{"type":"string","enum":["input_filters","script"],"description":"Method used to evaluate routing criteria.\n\n- **input_filters**: Use declarative filter expressions on each branch\n- **script**: Use a custom JavaScript function to determine the branch\n"},"script":{"type":"object","description":"Script configuration when routeRecordsUsing is \"script\".\n\nThe function should return the name of the branch to route to.\n","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name that returns the branch name"}}},"branches":{"type":"array","description":"List of branches defining different processing paths.\n\nEach branch has optional filter criteria and a set of processing steps.\nRecords are evaluated against branch criteria in order; the first\nmatching branch is selected.\n","items":{"type":"object","properties":{"name":{"type":"string","maxLength":300,"description":"Name of this branch.\n"},"description":{"type":"string","maxLength":10240,"description":"Description of when and why this branch is selected.\n"},"inputFilter":{"type":"object","description":"Filter 
criteria to determine if this branch should be selected.\n\nUses Celigo's expression-based filter format.\n","properties":{"version":{"type":"string","enum":["1"],"description":"Filter version"},"rules":{"type":"array","description":"Filter rules in Celigo expression-based filter format.\n\nArray-based DSL where the first element is an operator (e.g., \"equals\", \"and\", \"or\"),\nfollowed by operands which can be nested expressions.\n","items":{}}}},"nextRouterId":{"type":"string","description":"Identifier of the next router to chain to after this branch completes.\n\nUse \"outputRouter\" as a special terminal value to exit the tool\nand return the processing results.\n"},"pageProcessors":{"type":"array","description":"Processing steps to execute in this branch.\n\nEach processor references an export (lookup) or import resource\nfor data retrieval or submission.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["export","import"],"description":"Type of processor.\n\n- **export**: Retrieves data from an external system (lookup)\n- **import**: Sends data to an external system\n"},"_exportId":{"type":"string","format":"objectId","description":"Export resource reference (when type is \"export\")"},"_importId":{"type":"string","format":"objectId","description":"Import resource reference (when type is \"import\")"},"proceedOnFailure":{"type":"boolean","description":"Whether to continue processing subsequent steps if this\nprocessor fails.\n"},"responseMapping":{"type":"object","description":"Mapping configuration for the processor's response data.\n","properties":{"fields":{"type":"array","description":"Simple field-level mappings","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path from the response"},"generate":{"type":"string","description":"Target field path in the record"}}}},"lists":{"type":"array","description":"List-level mappings for array 
data","items":{"type":"object","properties":{"generate":{"type":"string","description":"Target list path"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Source field path"},"generate":{"type":"string","description":"Target field path"}}}}}}}}},"hooks":{"type":"object","description":"Custom scripts for processing","properties":{"postResponseMap":{"type":"object","description":"Script to run after response mapping","properties":{"_scriptId":{"type":"string","format":"objectId","description":"Reference to the script resource"},"function":{"type":"string","description":"Function name to execute"}}}}}}}}}}}},"required":["id","branches"]},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. 
The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/tools/{_id}/descendants":{"get":{"summary":"List resources a tool depends on, grouped by type","operationId":"listToolDescendants","tags":["Tools"],"description":"Returns the full dependency tree of a tool as three arrays: the\n`imports`, `exports`, and nested `tools` it references directly or\ntransitively. 
Each entry is the complete resource document (not just\nan id), so the caller doesn't need to fan out individual GETs.\n\nReturns `200` with `{imports:[], exports:[], tools:[]}` when the tool\nhas no dependencies.\n\nAI guidance:\n- Pair with `GET /v1/tools/{_id}/connections` to enumerate the full\n  resource + connection footprint in two calls.\n- Useful for impact analysis (what breaks if this import changes?),\n  template/clone previews, and orphaned-resource detection.","parameters":[{"name":"_id","in":"path","required":true,"description":"Tool id.","schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Full descendant resource docs grouped by type. Each array may be\nempty when the tool doesn't reference that resource kind.","content":{"application/json":{"schema":{"type":"object","properties":{"imports":{"type":"array","description":"Full Import documents the tool depends on.","items":{"$ref":"#/components/schemas/Response-3"}},"exports":{"type":"array","description":"Full Export documents the tool depends on.","items":{"$ref":"#/components/schemas/Response-4"}},"tools":{"type":"array","description":"Full inner Tool documents nested beneath this tool.","items":{"$ref":"#/components/schemas/Response"}}}}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
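As a sketch of how a caller might consume the `GET /v1/tools/{_id}/descendants` response above: since each array holds full resource documents, a small client-side helper can reduce them to id/name pairs for impact-analysis reports. The helper and sample payload below are illustrative, not part of any Celigo SDK, and the ids are made up.

```python
def summarize_descendants(body: dict) -> dict:
    """Reduce a descendants response to (id, name) pairs per resource kind.

    The endpoint always returns the three arrays `imports`, `exports`,
    and `tools`; each may be empty when the tool has no such dependency.
    """
    return {
        kind: [(doc.get("_id"), doc.get("name")) for doc in body.get(kind, [])]
        for kind in ("imports", "exports", "tools")
    }

# Illustrative response shape (ids are fabricated for the example).
sample = {
    "imports": [{"_id": "64f0c0ffee000000000000a1", "name": "Create order"}],
    "exports": [],
    "tools": [],
}
print(summarize_descendants(sample))
```

Pairing this with `GET /v1/tools/{_id}/connections`, as the spec suggests, gives the full resource-plus-connection footprint in two calls.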

## List dependencies of a tool

> Returns the set of resources that depend on the specified resource.\
> The response is an object whose keys are dependent-resource types\
> (e.g. \`flows\`, \`imports\`) and whose values are arrays of dependency\
> entries.\
> \
> AI guidance:\
> \- An empty object \`{}\` means no other resources depend on the target.\
>   This is also returned for a well-formatted but nonexistent id.

```json
{"openapi":"3.1.0","info":{"title":"Tools","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"DependencyResponse":{"type":"object","description":"Map of dependent-resource types to arrays of dependency entries.\nKeys are plural resource type strings (e.g. `flows`, `imports`,\n`connections`). An empty object `{}` means no dependents.\n","additionalProperties":{"type":"array","items":{"$ref":"#/components/schemas/DependencyEntry"}}},"DependencyEntry":{"type":"object","description":"A single resource that depends on the queried resource.","properties":{"id":{"type":"string","description":"Unique identifier of the dependent resource."},"name":{"type":"string","description":"Display name of the dependent resource."},"paths":{"type":"array","description":"JSON-path-style pointers within the dependent resource's document\nthat reference the target resource.\n","items":{"type":"string"}},"accessLevel":{"type":"string","description":"The caller's access level on the dependent resource."},"dependencyIds":{"type":"object","description":"Map of resource types to arrays of ids that this dependent\nresource references on the target. Keys are singular or plural\nresource type strings; values are arrays of id strings.\n","additionalProperties":{"type":"array","items":{"type":"string"}}}},"required":["id","name","paths","accessLevel","dependencyIds"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. 
Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/tools/{_id}/dependencies":{"get":{"operationId":"listToolDependencies","tags":["Tools"],"summary":"List dependencies of a tool","description":"Returns the set of resources that depend on the specified resource.\nThe response is an object whose keys are dependent-resource types\n(e.g. `flows`, `imports`) and whose values are arrays of dependency\nentries.\n\nAI guidance:\n- An empty object `{}` means no other resources depend on the target.\n  This is also returned for a well-formatted but nonexistent id.","parameters":[{"name":"_id","in":"path","required":true,"description":"Resource ID.","schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Dependency map. Keys are resource-type strings; values are arrays\nof dependency entries. Returns `{}` when no dependents exist.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DependencyResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
```
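The dependency map returned by this endpoint can be consumed as sketched below — a hypothetical Python helper (not part of any Celigo SDK) that flattens the `DependencyResponse` shape into readable lines, with an empty map meaning no dependents. The sample payload follows the `DependencyEntry` schema; the ids are fabricated.

```python
def list_dependents(body: dict) -> list[str]:
    """Flatten a DependencyResponse map into readable lines.

    An empty map means no other resources depend on the target
    (also returned for a well-formatted but nonexistent id).
    """
    return [
        f"{rtype}/{entry['id']} ({entry['name']}) via {', '.join(entry['paths'])}"
        for rtype, entries in sorted(body.items())
        for entry in entries
    ]

# Illustrative payload matching the DependencyEntry schema (not real data).
sample = {
    "flows": [
        {
            "id": "5f0000000000000000000001",
            "name": "Order sync",
            "paths": ["/pageProcessors/0/_importId"],
            "accessLevel": "manage",
            "dependencyIds": {"imports": ["5f0000000000000000000002"]},
        }
    ]
}
print(list_dependents(sample))
print(list_dependents({}))  # [] — nothing depends on the tool
```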


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://developer.celigo.com/api/api-reference/tools.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
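As a sketch, the query URL can be assembled with standard URL encoding — Python is used here purely for illustration; any HTTP client works the same way:

```python
from urllib.parse import urlencode

BASE = "https://developer.celigo.com/api/api-reference/tools.md"

def ask_url(question: str) -> str:
    # The question goes in the `ask` query parameter and must be URL-encoded.
    return f"{BASE}?{urlencode({'ask': question})}"

print(ask_url("Which routing strategies do tools support?"))
```

A plain GET on the resulting URL returns the answer with supporting excerpts.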
